COSMIC: Data Efficient Instruction-tuning For Speech In-context Learning

Pan Jing, Wu Jian, Gaur Yashesh, Sivasankaran Sunit, Chen Zhuo, Liu Shujie, Li Jinyu. arXiv 2023

[Paper]    
Tags: Ethics And Bias, Fine Tuning, GPT, In Context Learning, Merging, Model Architecture, Prompting, RAG

We present a cost-effective method to integrate speech into a large language model (LLM), resulting in a multi-modal LLM we call COSMIC (Contextual Speech Model with Instruction-following and In-Context-learning Capabilities). Using GPT-3.5, we generate Speech Comprehension Test question-answer (SQA) pairs from speech transcriptions for supervised instruction tuning. With under 30 million trainable parameters and only 450 hours of English speech data, COSMIC demonstrates emerging capabilities in instruction following and in-context learning. Equipped with these capabilities, COSMIC achieves a maximum 33.18 BLEU score in 0-shot EN-to-X speech-to-text translation (S2TT) and a significant boost in the 1-shot setting, as well as an average 25.8% relative word error rate (WER) reduction in 1-shot cross-domain adaptation. COSMIC also exhibits a significant automatic speech recognition (ASR) accuracy gain in contextual biasing tasks, owing to its instruction-following capability.
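
As a rough illustration of the data-generation step described above, the sketch below prompts GPT-3.5 for comprehension question-answer pairs grounded in a speech transcription and packages them as instruction-tuning examples. The prompt wording, JSON output format, and field names are assumptions made for illustration, not the paper's released pipeline.

```python
# Minimal sketch of SQA-pair generation, assuming an OpenAI-style API key in
# the environment. Prompt text, output schema, and example fields are
# hypothetical; only the overall idea (GPT-3.5 turns transcripts into Q&A
# supervision for instruction tuning) comes from the abstract.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "Read the following speech transcription and write {n} comprehension "
    "question-answer pairs about it. Return a JSON list of objects with "
    "'question' and 'answer' fields.\n\nTranscription:\n{transcript}"
)

def generate_sqa_pairs(transcript: str, n: int = 3) -> list[dict]:
    """Ask GPT-3.5 for comprehension Q&A pairs about one transcript."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(n=n, transcript=transcript),
        }],
        temperature=0.7,
    )
    # Assumes the model follows the JSON instruction; real code would validate.
    return json.loads(response.choices[0].message.content)

def build_instruction_examples(utterances: list[dict]) -> list[dict]:
    """Pair each audio clip with generated Q&A as instruction-tuning examples."""
    examples = []
    for utt in utterances:  # each dict: {"audio_path": ..., "transcript": ...}
        for qa in generate_sqa_pairs(utt["transcript"]):
            examples.append({
                "audio": utt["audio_path"],     # speech input to the model
                "instruction": qa["question"],  # text instruction / question
                "target": qa["answer"],         # supervision target
            })
    return examples
```

In this sketch the speech side only contributes the audio path; during training the model would consume the audio while the question and answer provide the text instruction and target, which is the pairing the SQA data is meant to supply.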

Similar Work