A Pre-training Based Personalized Dialogue Generation Model With Persona-sparse Data

Zheng Yinhe, Zhang Rongsheng, Mao Xiaoxi, Huang Minlie. arXiv 2019

[Paper]    
Applications Attention Mechanism Model Architecture Training Techniques

Endowing dialogue systems with personas is essential for delivering more human-like conversations. However, this problem remains far from well explored, owing to the difficulty of embodying personalities in natural language and the persona-sparsity issue observed in most dialogue corpora. This paper proposes a pre-training based personalized dialogue model that can generate coherent responses from persona-sparse dialogue data. In this method, a pre-trained language model is used to initialize an encoder and decoder, and personal attribute embeddings are devised to model richer dialogue contexts by encoding speakers’ personas together with dialogue histories. Further, to incorporate the target persona in the decoding process and to balance its contribution, an attention routing structure is devised in the decoder to merge features extracted from the target persona and the dialogue context using dynamically predicted weights. Our model can utilize persona-sparse dialogues in a unified manner during training, and can also control the amount of persona-related features exhibited during inference. Both automatic and manual evaluations demonstrate that the proposed model outperforms state-of-the-art methods, generating more coherent and persona-consistent responses from persona-sparse data.
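The attention routing idea from the abstract can be sketched roughly as follows: two attention branches read from the encoded target persona and the encoded dialogue context, and a small predictor produces a per-step merging weight. This is a minimal illustrative sketch, not the authors' implementation; all names (`AttentionRouter`, `weight_predictor`) and dimensions are hypothetical, and a standard PyTorch multi-head attention API is assumed.

```python
import torch
import torch.nn as nn

class AttentionRouter(nn.Module):
    """Illustrative sketch of attention routing: merge features attended
    from the target persona and from the dialogue context using a
    dynamically predicted weight (hypothetical, not the paper's code)."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.persona_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.context_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Small predictor that decides how much persona to inject per step.
        self.weight_predictor = nn.Sequential(nn.Linear(d_model, 1), nn.Sigmoid())

    def forward(self, decoder_states, persona_enc, context_enc):
        # Features attended from the target persona and the dialogue context.
        persona_feat, _ = self.persona_attn(decoder_states, persona_enc, persona_enc)
        context_feat, _ = self.context_attn(decoder_states, context_enc, context_enc)
        # Dynamically predicted merging weight alpha in [0, 1].
        alpha = self.weight_predictor(decoder_states)
        # Weighted merge of the two feature streams.
        return alpha * persona_feat + (1 - alpha) * context_feat
```

In a sketch like this, `alpha` could also be fixed or clamped at inference time, which would correspond to the controllability the abstract describes: raising it exhibits more persona-related features, lowering it yields more context-driven responses.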

Similar Work