MCP: Self-supervised Pre-training For Personalized Chatbots With Multi-level Contrastive Sampling

Huang Zhaoheng, Dou Zhicheng, Zhu Yutao, Ma Zhengyi. EMNLP 2022

[Paper]    
Merging Pretraining Methods RAG Reinforcement Learning Tools Training Techniques

Personalized chatbots focus on endowing chatbots with a consistent personality so that they behave like real users and can further act as personal assistants. Previous studies have explored generating implicit user profiles from the user's dialogue history for building personalized chatbots. However, these studies train the entire model with only the response generation loss, making them prone to data sparsity. Moreover, they overemphasize the quality of the final generated response while ignoring the correlations and fusion within the user's dialogue history, leading to coarse data representations and performance degradation. To tackle these problems, we propose MCP, a self-supervised learning framework for capturing better representations from users' dialogue history for personalized chatbots. Specifically, we apply contrastive sampling methods to leverage the supervised signals hidden in user dialogue history and generate pre-training samples for enhancing the model. We design three pre-training tasks based on three types of contrastive pairs drawn from user dialogue history, namely response pairs, sequence augmentation pairs, and user pairs. We pre-train the utterance encoder and the history encoder with these contrastive objectives and use the pre-trained encoders to generate user profiles during personalized response generation. Experimental results on two real-world datasets show that our proposed model MCP achieves significant improvements over existing methods.
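
The abstract describes three levels of contrastive pre-training (response pairs, sequence augmentation pairs, and user pairs) over two encoders. Below is a minimal sketch of what such a multi-level contrastive objective could look like; it assumes an in-batch InfoNCE-style loss, and the encoder interfaces, batch keys, temperature, and equal loss weighting are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of a multi-level contrastive pre-training objective.
# Assumptions (hypothetical, not from the paper's code): InfoNCE with
# in-batch negatives, placeholder encoders, and a fixed temperature.
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, temperature=0.1):
    """In-batch InfoNCE: row i of `anchor` should match row i of `positive`."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature      # (B, B) similarity matrix
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


def mcp_pretraining_loss(utt_encoder, hist_encoder, batch):
    """Combine three contrastive levels built from a user's dialogue history:
    response pairs, augmented history sequences, and user pairs."""
    # 1) Response level: two responses from the same user should be close.
    r1 = utt_encoder(batch["response_a"])             # (B, d)
    r2 = utt_encoder(batch["response_b"])
    loss_resp = info_nce(r1, r2)

    # 2) Sequence level: two augmented views of the same history should be close.
    h1 = hist_encoder(utt_encoder(batch["history_view_a"]))
    h2 = hist_encoder(utt_encoder(batch["history_view_b"]))
    loss_seq = info_nce(h1, h2)

    # 3) User level: two disjoint history halves of the same user should be close.
    u1 = hist_encoder(utt_encoder(batch["user_half_a"]))
    u2 = hist_encoder(utt_encoder(batch["user_half_b"]))
    loss_user = info_nce(u1, u2)

    # Equal weighting is an assumption; the paper may balance the tasks differently.
    return loss_resp + loss_seq + loss_user
```

The pre-trained `utt_encoder` and `hist_encoder` would then be reused to build implicit user profiles during personalized response generation, as the abstract describes.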

Similar Work