"in Dialogues We Learn": Towards Personalized Dialogue Without Pre-defined Profiles Through In-dialogue Learning · The Large Language Model Bible Contribute to LLM-Bible

"in Dialogues We Learn": Towards Personalized Dialogue Without Pre-defined Profiles Through In-dialogue Learning

Cheng Chuanqi, Tu Quan, Wu Wei, Shang Shuo, Mao Cunli, Yu Zhengtao, Yan Rui. arXiv 2024

[Paper]    
Tags: Applications, Attention Mechanism, Fine Tuning, Model Architecture, Pretraining Methods, RAG, Tools, Training Techniques

Personalized dialogue systems have gained significant attention in recent years for their ability to generate responses aligned with different personas. However, most existing approaches rely on pre-defined personal profiles, which are time-consuming and labor-intensive to create and also lack flexibility. We propose In-Dialogue Learning (IDL), a fine-tuning framework that enhances the ability of pre-trained large language models to leverage dialogue history to characterize personas for personalized dialogue generation, without pre-defined profiles. Experiments on three datasets demonstrate that IDL brings substantial improvements, with BLEU and ROUGE scores increasing by up to 200% and 247%, respectively. Human evaluations further validate the efficacy of the proposed method.
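The abstract does not spell out how dialogue history substitutes for a hand-written profile, but the core idea can be sketched as a data-construction step for supervised fine-tuning: earlier dialogues by the same speaker serve as persona evidence in the prompt, and the speaker's turns in a later session serve as training targets. The sketch below is a hypothetical illustration under those assumptions, not the paper's actual implementation; `Turn`, `build_sft_examples`, and the prompt layout are all invented names and formatting.

```python
# Hypothetical sketch: build (prompt, response) fine-tuning pairs where the
# persona is characterized by the speaker's own past dialogues rather than a
# pre-defined profile. Illustrative only; not the IDL paper's implementation.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Turn:
    speaker: str  # "user" or "persona" (the speaker being modeled)
    text: str


def build_sft_examples(
    sessions: List[List[Turn]], target_speaker: str = "persona"
) -> List[Tuple[str, str]]:
    """Turn multi-session dialogue history into (prompt, response) pairs.

    All sessions except the last act as in-dialogue persona evidence; each of
    the target speaker's turns in the final session becomes a training target.
    No pre-defined profile text appears anywhere in the prompt.
    """
    examples: List[Tuple[str, str]] = []
    *history, current = sessions
    # Flatten prior sessions into a persona-evidence preamble.
    evidence = "\n".join(
        f"{t.speaker}: {t.text}" for session in history for t in session
    )
    context: List[str] = []
    for turn in current:
        if turn.speaker == target_speaker and context:
            prompt = (
                f"{evidence}\n---\n" + "\n".join(context) + f"\n{target_speaker}:"
            )
            examples.append((prompt, turn.text))
        context.append(f"{turn.speaker}: {turn.text}")
    return examples


if __name__ == "__main__":
    sessions = [
        [Turn("user", "Any weekend plans?"), Turn("persona", "Hiking, as always!")],
        [
            Turn("user", "What do you do to relax?"),
            Turn("persona", "I usually head for the trails."),
        ],
    ]
    for prompt, response in build_sft_examples(sessions):
        print(prompt, "->", response)
```

The resulting pairs can feed any standard causal-LM fine-tuning loop; the key design choice this sketch illustrates is that persona information enters only through observed dialogue turns, so no profile authoring is needed.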
