
DeSTA: Enhancing Speech Language Models Through Descriptive Speech-Text Alignment

Lu Ke-han, Chen Zhehuai, Fu Szu-wei, Huang He, Ginsburg Boris, Wang Yu-chiang Frank, Lee Hung-yi. arXiv 2024

[Paper]    
RAG Reinforcement Learning

Recent speech language models (SLMs) typically incorporate pre-trained speech models to extend the capabilities of large language models (LLMs). In this paper, we propose a descriptive speech-text alignment approach that leverages speech captioning to bridge the gap between the speech and text modalities, enabling SLMs to interpret and generate comprehensive natural language descriptions and thereby to understand both linguistic and non-linguistic features in speech. Enhanced with the proposed approach, our model demonstrates superior performance on the Dynamic-SUPERB benchmark, particularly in generalizing to unseen tasks. Moreover, we find that the aligned model exhibits zero-shot instruction-following capability without explicit speech instruction tuning. These findings highlight the potential of rich, descriptive speech captions to reshape instruction-following SLMs.
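The key idea is that a single descriptive caption can carry both what was said and how it was said. The sketch below is a minimal illustration of that idea, not the paper's released pipeline: it shows how such a caption might be assembled from per-clip metadata and paired with the audio as a training target for speech-text alignment. The metadata schema, field names, and the instruction/prompt template are all hypothetical.

```python
# Illustrative sketch only: assembling a descriptive speech caption (covering
# linguistic content plus non-linguistic attributes) into an alignment example.
# All field names and the prompt template below are assumptions, not the
# authors' actual data format.
from dataclasses import dataclass


@dataclass
class SpeechClip:
    audio_path: str      # waveform fed to the speech encoder
    transcript: str      # linguistic content
    gender: str          # non-linguistic attributes (hypothetical schema)
    emotion: str
    speaking_rate: str


def build_caption(clip: SpeechClip) -> str:
    """Render the clip's metadata into a natural-language description."""
    return (
        f'A {clip.gender} speaker says "{clip.transcript}" '
        f"in a {clip.emotion} tone at a {clip.speaking_rate} speaking rate."
    )


def build_alignment_example(clip: SpeechClip) -> dict:
    """Pair the audio with an instruction and the caption as the target.

    During alignment training, the SLM would consume the audio (through its
    speech encoder) together with the text instruction and learn to generate
    the descriptive caption.
    """
    return {
        "audio": clip.audio_path,
        "instruction": "Describe the speech in detail.",
        "target": build_caption(clip),
    }


if __name__ == "__main__":
    clip = SpeechClip(
        audio_path="clip_0001.wav",
        transcript="the weather is nice today",
        gender="female",
        emotion="cheerful",
        speaking_rate="moderate",
    )
    print(build_alignment_example(clip))
```

Because the target text already interleaves the transcript with paralinguistic cues, a model trained on such examples is pushed to describe speech in the same open-ended natural language it uses for text, which is the alignment effect the abstract describes.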

Similar Work