MINDECHO: Role-playing Language Agents For Key Opinion Leaders

Rui Xu, Dakuan Lu, Xiaoyu Tan, Xintao Wang, Siyu Yuan, Jiangjie Chen, Wei Chu, Yinghui Xu. arXiv 2024

[Paper]    
Agentic Applications GPT Model Architecture RAG Tools Training Techniques

Large language models (LLMs) have demonstrated impressive performance in various applications, among which role-playing language agents (RPLAs) have engaged a broad user base. There is now a growing demand for RPLAs that represent Key Opinion Leaders (KOLs), i.e., Internet celebrities who shape the trends and opinions in their domains. However, this line of research remains underexplored. In this paper, we therefore introduce MINDECHO, a comprehensive framework for the development and evaluation of KOL RPLAs. MINDECHO collects KOL data from Internet video transcripts in various professional fields and synthesizes their conversations using GPT-4. The conversations and the transcripts are then used for individualized model training and inference-time retrieval, respectively. Our evaluation covers both general dimensions (i.e., knowledge and tone) and fan-centric dimensions for KOLs. Extensive experiments validate the effectiveness of MINDECHO in developing and evaluating KOL RPLAs.
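To make the inference-time retrieval step concrete, below is a minimal Python sketch of retrieval-augmented prompting over a KOL's transcript chunks. It is not the authors' implementation: the toy transcript store, the persona instruction, and helper names such as `retrieve` and `build_prompt` are hypothetical, and TF-IDF similarity stands in for whatever retriever MINDECHO actually uses.

```python
# Hedged sketch (not the MINDECHO codebase): retrieve transcript chunks
# relevant to a fan's question and prepend them to a role-play prompt,
# which would then be sent to the individually fine-tuned KOL model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy transcript chunks for one KOL (in practice: segmented video transcripts).
transcript_chunks = [
    "In today's video I review the new mirrorless camera lineup...",
    "My top three tips for low-light photography are...",
    "Here is why I switched from zoom lenses to prime lenses...",
]

vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(transcript_chunks)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k transcript chunks most similar to the user query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, chunk_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [transcript_chunks[i] for i in top]

def build_prompt(user_query: str) -> str:
    """Compose a role-play prompt: persona instruction + retrieved context."""
    context = "\n".join(retrieve(user_query))
    return (
        "You are a photography KOL chatting with a fan in your usual tone.\n"
        f"Relevant material from your past videos:\n{context}\n\n"
        f"Fan: {user_query}\nKOL:"
    )

if __name__ == "__main__":
    print(build_prompt("Any advice for shooting at night?"))
```

In this reading, the GPT-4-synthesized conversations shape the model's persona through fine-tuning, while the raw transcripts remain available at inference time as a retrieval corpus for up-to-date, domain-specific content.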

Similar Work