Capturing Minds, Not Just Words: Enhancing Role-playing Language Models With Personality-indicative Data

Yiting Ran, Xintao Wang, Rui Xu, Xinfeng Yuan, Jiaqing Liang, Yanghua Xiao, Deqing Yang. arXiv 2024

[Paper] [Code]

Role-playing agents (RPAs) have become a popular application of large language models (LLMs), attracting significant interest from both industry and academia. While existing RPAs portray characters' knowledge and tone well, they struggle to capture characters' minds, especially in the case of small role-playing language models (RPLMs). In this paper, we propose to enhance RPLMs via personality-indicative data. Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters. Experimental results validate that RPLMs trained on our dataset exhibit advanced role-playing capabilities in both general and personality-related evaluations. Code and data are available at https://github.com/alienet1109/RolePersonality.
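The data-generation idea in the abstract can be sketched as follows: a psychological-scale item is rewritten as an in-character interview question, and the resulting prompt is what would be sent to an advanced RPA to distill a personality-indicative dialogue turn. This is a minimal illustrative sketch, not the paper's released pipeline; the scale items, prompt wording, and function names are all assumptions.

```python
# Hypothetical sketch of the personality-indicative data pipeline.
# The scale items below are BFI-style paraphrases for illustration only,
# not items from the paper's actual dataset.

SCALE_ITEMS = {
    "extraversion": "I see myself as someone who is outgoing and sociable.",
    "neuroticism": "I see myself as someone who worries a lot.",
}

def item_to_question(item: str) -> str:
    """Rewrite a first-person scale statement as an open interview question."""
    statement = item.replace("I see myself as someone who ", "").rstrip(".")
    return f"Would you describe yourself as someone who {statement}? Why or why not?"

def build_roleplay_prompt(character: str, trait: str) -> str:
    """Compose the prompt that would be sent to an advanced RPA for distillation."""
    question = item_to_question(SCALE_ITEMS[trait])
    return (
        f"You are role-playing as {character}. Stay fully in character.\n"
        f"Interviewer: {question}\n"
        f"{character}:"
    )

# Example: generate one distillation prompt for a character.
print(build_roleplay_prompt("Hermione Granger", "extraversion"))
```

In the full pipeline, the RPA's in-character answer to each such prompt would form one (question, response) pair in the training dialogues.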

Similar Work