How Well Can LLMs Echo Us? Evaluating AI Chatbots' Role-Play Ability with ECHO

Man Tik Ng, Hui Tung Tse, Jen-tse Huang, Jingjing Li, Wenxuan Wang, Michael R. Lyu. arXiv 2024

[Paper] [Code]    
Tags: GPT, Has Code, Model Architecture, RAG, Reinforcement Learning, Tools

The role-play ability of Large Language Models (LLMs) has emerged as a popular research direction. However, existing studies focus on imitating well-known public figures or fictional characters, overlooking the potential of simulating ordinary individuals. This oversight limits progress on digital human clones and non-player characters in video games. To bridge this gap, we introduce ECHO, an evaluative framework inspired by the Turing test. The framework engages acquaintances of the target individuals to distinguish between human-written and machine-generated responses. Notably, our framework focuses on emulating average individuals rather than historical or fictional figures, which makes the Turing test directly applicable. We evaluated three role-playing LLMs using ECHO, with GPT-3.5 and GPT-4 serving as foundational models, alongside the online application GPTs from OpenAI. Our results demonstrate that GPT-4 deceives human evaluators more effectively, with GPTs achieving the leading success rate of 48.3%. Furthermore, we investigated whether LLMs can distinguish human-generated from machine-generated texts: while GPT-4 can identify differences between the two, it could not determine which texts were human-produced. Our code and the results of reproducing the role-playing LLMs are publicly available at https://github.com/CUHK-ARISE/ECHO.
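For concreteness, the evaluation the abstract describes reduces to a pairwise Turing-test loop: for each question, a judge sees a human-written and a machine-generated response in random order, picks the one it believes is human, and the model's success rate is the fraction of trials in which the judge mistakes the machine response for the human one. The sketch below illustrates that protocol; the function name, judge interface, and data layout are assumptions for illustration, not the authors' released ECHO code.

```python
import random

def echo_deception_rate(paired_responses, judge):
    """Pairwise Turing-test loop (illustrative sketch, not the authors' code).

    paired_responses: list of (human_text, machine_text) pairs answering
        the same question.
    judge: callable taking two shuffled texts and returning the index
        (0 or 1) of the one it believes a human wrote.
    Returns the fraction of trials in which the judge picked the
    machine-generated text, i.e. the model's deception success rate.
    """
    deceived = 0
    for human_text, machine_text in paired_responses:
        options = [("human", human_text), ("machine", machine_text)]
        random.shuffle(options)  # hide which position holds the human text
        guess = judge(options[0][1], options[1][1])
        if options[guess][0] == "machine":  # machine text mistaken for human
            deceived += 1
    return deceived / len(paired_responses)

if __name__ == "__main__":
    # Toy data and a random stand-in judge; in ECHO the judges are
    # acquaintances of the target individual (or GPT-4 itself).
    pairs = [("I grabbed coffee with Sam today.",
              "I enjoyed a nice coffee with Sam earlier today.")] * 10
    rate = echo_deception_rate(pairs, judge=lambda a, b: random.randint(0, 1))
    print(f"deception rate: {rate:.1%}")  # ~50% expected for a random judge
```

Under this framing, 50% is the chance baseline for an undiscerning judge, so the reported 48.3% success rate for GPTs means its responses were nearly indistinguishable from the target individuals' own.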
