Towards Joint Modeling Of Dialogue Response And Speech Synthesis Based On Large Language Model

Zhou Xinyu, Chen Delong, Chen Yudong. arXiv 2023

[Paper]    
Applications

This paper explores the potential of constructing an AI spoken dialogue system that “thinks how to respond” and “thinks how to speak” simultaneously, which aligns more closely with the human speech production process than the current cascade pipeline of independent chatbot and Text-to-Speech (TTS) modules. We hypothesize that Large Language Models (LLMs) with billions of parameters possess significant speech understanding capabilities and can jointly model dialogue responses and linguistic features. We conduct two sets of experiments: 1) prosodic structure prediction, a typical TTS front-end task, which demonstrates the speech understanding ability of LLMs, and 2) further integration of the dialogue response and a wide array of linguistic features using a unified encoding format. Our results indicate that the LLM-based approach is a promising direction for building unified spoken dialogue systems.
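
The abstract does not spell out the unified encoding format, so the sketch below only illustrates the general idea: a single LLM generation carries both the dialogue response and inline prosodic-boundary labels that a TTS front end could consume. The prompt layout, the `#1`/`#2`/`#3` boundary tags, and the helper functions (`build_prompt`, `parse_joint_output`) are assumptions for illustration, not the authors' actual format.

```python
# Illustrative sketch of a "unified encoding" for joint dialogue response and
# prosodic annotation. The tag set and prompt/output layout are assumptions,
# not the paper's published format.

from dataclasses import dataclass


@dataclass
class JointOutput:
    response_text: str        # dialogue response, plain text
    annotated_response: str   # same response with inline prosodic boundary tags


def build_prompt(user_utterance: str) -> str:
    """Ask the LLM to produce the response and its prosodic annotation in one pass."""
    return (
        "You are a spoken dialogue assistant.\n"
        "Reply to the user, then repeat your reply with prosodic boundaries marked\n"
        "inline (#1 = prosodic word, #2 = prosodic phrase, #3 = intonational phrase).\n"
        f"User: {user_utterance}\n"
        "Response:\n"
        "Annotated:\n"
    )


def parse_joint_output(llm_text: str) -> JointOutput:
    """Split a single LLM generation back into the response and its annotation."""
    response, annotated = "", ""
    for line in llm_text.splitlines():
        if line.startswith("Response:"):
            response = line[len("Response:"):].strip()
        elif line.startswith("Annotated:"):
            annotated = line[len("Annotated:"):].strip()
    return JointOutput(response_text=response, annotated_response=annotated)


if __name__ == "__main__":
    prompt = build_prompt("What's the weather like today?")
    # `fake_generation` stands in for the text an actual LLM API call would return.
    fake_generation = (
        "Response: It looks sunny with a light breeze this afternoon.\n"
        "Annotated: It looks sunny #2 with a light breeze #1 this afternoon #3"
    )
    print(parse_joint_output(fake_generation))
```

In such a setup the annotated string could be handed directly to a TTS back end, so the response content and its prosodic rendering come from one generation rather than from separate chatbot and front-end modules.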

Similar Work