
BLSP: Bootstrapping Language-speech Pre-training Via Behavior Alignment Of Continuation Writing

Wang Chen, Liao Minpeng, Huang Zhongqiang, Lu Jinliang, Wu Junhong, Liu Yuchen, Zong Chengqing, Zhang Jiajun. arXiv 2023

[Paper]    
Prompting, Training Techniques

The emergence of large language models (LLMs) has sparked significant interest in extending their remarkable language capabilities to speech. However, modality alignment between speech and text remains an open problem. Current solutions fall into two categories. One is a cascaded approach, in which the outputs (tokens or states) of a separately trained speech recognition system are fed to the LLM, which limits the potential for modeling alignment between speech and text. The other is an end-to-end approach that relies on speech instruction data, which is difficult to collect in large quantities. In this paper, we address these issues and propose BLSP, an approach that Bootstraps Language-Speech Pre-training via behavior alignment of continuation writing. We achieve this by learning a lightweight modality adapter between a frozen speech encoder and an LLM, ensuring that the LLM exhibits the same generation behavior regardless of the modality of the input: a speech segment or its transcript. The training process has two steps. The first step prompts the LLM with speech transcripts as prefixes, obtaining text continuations. In the second step, these continuations serve as supervised signals to train the modality adapter end-to-end. We demonstrate that this straightforward process extends the capabilities of LLMs to speech, enabling speech recognition, speech translation, spoken language understanding, and speech conversation, even in zero-shot cross-lingual scenarios.
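
As a rough illustration of this two-step recipe, the sketch below first collects text continuations from the text-only LLM and then trains only a small adapter between a frozen speech encoder and a HuggingFace-style causal LLM on those continuations. The adapter architecture, prompt text, hyperparameters, and helper names here are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the BLSP two-step behavior-alignment recipe (illustrative assumptions).
import torch
import torch.nn as nn


class ModalityAdapter(nn.Module):
    """Lightweight adapter mapping frozen speech-encoder states into the LLM embedding space."""

    def __init__(self, speech_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(speech_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, speech_states: torch.Tensor) -> torch.Tensor:
        # (batch, frames, speech_dim) -> (batch, frames, llm_dim)
        return self.proj(speech_states)


def collect_continuations(llm, tokenizer, transcripts, prompt="Continue the following text: "):
    """Step 1: prompt the text-only LLM with each transcript as a prefix and keep its continuation."""
    continuations = []
    for text in transcripts:
        ids = tokenizer(prompt + text, return_tensors="pt").input_ids
        out = llm.generate(ids, max_new_tokens=128)
        continuations.append(tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True))
    return continuations


def adapter_training_step(speech_encoder, adapter, llm, speech, continuation_ids, optimizer):
    """Step 2: train only the adapter so speech input yields the same continuation.

    The optimizer is assumed to hold only the adapter's parameters; the speech
    encoder and the LLM are kept frozen.
    """
    with torch.no_grad():
        speech_states = speech_encoder(speech)      # (batch, frames, speech_dim)
    speech_embeds = adapter(speech_states)          # only trainable component

    # Prepend the adapted speech embeddings to the embedded continuation tokens and
    # compute next-token cross-entropy on the continuation only (-100 masks the speech part).
    cont_embeds = llm.get_input_embeddings()(continuation_ids)
    inputs_embeds = torch.cat([speech_embeds, cont_embeds], dim=1)
    labels = torch.cat(
        [torch.full(speech_embeds.shape[:2], -100, dtype=torch.long, device=continuation_ids.device),
         continuation_ids],
        dim=1,
    )
    loss = llm(inputs_embeds=inputs_embeds, labels=labels).loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because gradients only flow into the adapter, the LLM's text-side behavior is untouched while speech inputs are pulled toward producing the same continuations as their transcripts, which is the behavior-alignment idea described in the abstract.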

Similar Work