
Investigating Decoder-only Large Language Models For Speech-to-text Translation

Chao-Wei Huang, Hui Lu, Hongyu Gong, Hirofumi Inaguma, Ilia Kulikov, Ruslan Mavlyutov, Sravya Popuri. arXiv 2024

[Paper]    
Tags: Fine-Tuning, Model Architecture, Pretraining Methods, Training Techniques

Large language models (LLMs), known for their exceptional reasoning capabilities, generalizability, and fluency across diverse domains, present a promising avenue for enhancing speech-related tasks. In this paper, we focus on integrating decoder-only LLMs into the task of speech-to-text translation (S2TT). We propose a decoder-only architecture that enables the LLM to directly consume the encoded speech representation and generate the text translation. Additionally, we investigate the effects of different parameter-efficient fine-tuning techniques and task formulations. Our model achieves state-of-the-art performance on CoVoST 2 and FLEURS among models trained without proprietary data. We also conduct analyses to validate the design choices of our proposed model and offer insights into integrating LLMs into S2TT.
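To make the described architecture concrete, below is a minimal sketch of how a decoder-only LLM can directly consume encoded speech: encoder features are length-reduced and projected into the LLM's embedding space, then prepended as a prefix to the target-text embeddings. This is an illustrative assumption, not the authors' implementation; it assumes a HuggingFace-style LM that accepts `inputs_embeds`, and the class name `SpeechToTextLLM`, the frame-stacking length adapter, and the `stack=4` factor are all hypothetical choices.

```python
import torch
import torch.nn as nn


class SpeechToTextLLM(nn.Module):
    """Hypothetical decoder-only S2TT wrapper (a sketch, not the paper's code).

    A pretrained speech encoder yields frame-level features; a length
    adapter stacks adjacent frames and projects them into the LLM's
    embedding space; the decoder-only LLM then attends over this speech
    prefix and generates the translation autoregressively.
    """

    def __init__(self, speech_encoder, llm, enc_dim, llm_dim, stack=4):
        super().__init__()
        self.speech_encoder = speech_encoder  # assumed pretrained speech encoder
        self.llm = llm                        # HF-style decoder-only LM accepting inputs_embeds
        self.stack = stack                    # frame-stacking factor (illustrative)
        self.proj = nn.Linear(enc_dim * stack, llm_dim)

    def encode_speech(self, speech):
        h = self.speech_encoder(speech)             # assumed shape (B, T, enc_dim)
        T = h.size(1) - h.size(1) % self.stack      # drop trailing frames so T divides evenly
        h = h[:, :T].reshape(h.size(0), T // self.stack, -1)
        return self.proj(h)                         # (B, T/stack, llm_dim)

    def forward(self, speech, target_ids):
        speech_embeds = self.encode_speech(speech)
        text_embeds = self.llm.get_input_embeddings()(target_ids)
        # Prepend the speech prefix to the target-text embeddings; the LM
        # loss would be computed on the text positions only.
        inputs_embeds = torch.cat([speech_embeds, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)
```

Under this setup, one reading of the parameter-efficient fine-tuning the abstract mentions is to keep the LLM weights largely frozen and train only lightweight adapters (e.g., LoRA on the attention projections) together with the speech-to-LLM projection.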

Similar Work