BLSP-KD: Bootstrapping Language-speech Pre-training Via Knowledge Distillation

Wang Chen, Liao Minpeng, Huang Zhongqiang, Zhang Jiajun. arXiv 2024

[Paper]    
Tags: Distillation, Efficiency And Optimization, Fine Tuning, Pretraining Methods, Reinforcement Learning, Training Techniques

Recent end-to-end approaches have shown promise in extending large language models (LLMs) to speech inputs, but they face limitations in directly assessing and optimizing alignment quality and fail to achieve fine-grained alignment due to the speech-text length mismatch. We introduce BLSP-KD, a novel approach for Bootstrapping Language-Speech Pretraining via Knowledge Distillation, which addresses these limitations through two key techniques. First, it optimizes speech-text alignment by minimizing the divergence between the LLM’s next-token prediction distributions for speech and text inputs using knowledge distillation. Second, it employs a continuous integrate-and-fire strategy to segment speech into tokens that correspond one-to-one with text tokens, enabling fine-grained alignment. We also introduce Partial LoRA (PLoRA), a new adaptation method supporting LLM fine-tuning on speech inputs under knowledge distillation. Quantitative evaluation shows that BLSP-KD outperforms previous end-to-end baselines and cascaded systems with a comparable number of parameters, facilitating general instruction-following capabilities for LLMs with speech inputs. This approach provides new possibilities for extending LLMs to spoken language interactions.
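
The following is a minimal sketch, not the authors' released code, of the distillation objective described in the abstract: the LLM's next-token distributions on text input act as the teacher and its distributions on speech input as the student, with a token-level KL divergence between the two. It assumes the continuous integrate-and-fire segmentation has already produced one speech token per text token (so the logit tensors share a sequence length); the function name, `temperature` parameter, and KL choice of divergence are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def speech_text_kd_loss(text_logits: torch.Tensor,
                        speech_logits: torch.Tensor,
                        temperature: float = 1.0) -> torch.Tensor:
    """Token-level KL(teacher || student) between next-token distributions.

    text_logits:   (batch, seq_len, vocab) from the LLM on text input (teacher)
    speech_logits: (batch, seq_len, vocab) from the LLM on speech input (student)

    Positions are assumed to correspond one-to-one, as produced by the
    continuous integrate-and-fire segmentation described in the paper.
    """
    # Teacher distributions are detached so gradients only flow through
    # the speech (student) branch.
    teacher_probs = F.softmax(text_logits.detach() / temperature, dim=-1)
    student_log_probs = F.log_softmax(speech_logits / temperature, dim=-1)
    # F.kl_div expects log-probabilities for the input and probabilities
    # for the target; "batchmean" averages the summed KL over the batch.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```

Minimizing this loss pushes the speech branch to reproduce the teacher's predictions at every aligned position, which is how the method directly optimizes alignment quality rather than relying on a surrogate objective.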

Similar Work