
Mitigating Hallucination In Fictional Character Role-play

Nafis Sadeq, Zhouhang Xie, Byungkyu Kang, Prarit Lamba, Xiang Gao, Julian McAuley. arXiv 2024

[Paper] [Code]

Tags: Agentic, Applications, Has Code, Reinforcement Learning, Security

Role-playing has wide-ranging applications, including customer support, embodied agents, and computational social science. The parametric world knowledge of large language models (LLMs) often causes role-playing characters to act out of character and to hallucinate about things outside the scope of their knowledge. In this work, we focus on evaluating and mitigating hallucination in fictional character role-play. We introduce a dataset with more than 2,000 characters and 72,000 interviews, including 18,000 adversarial questions. We propose RoleFact, a role-playing method that mitigates hallucination by modulating the influence of parametric knowledge using a pre-calibrated confidence threshold. Experiments show that the proposed method improves the factual precision of generated responses by 18% on adversarial questions, with a 44% reduction in temporal hallucination for time-sensitive interviews. The code and the dataset will be available at https://github.com/NafisSadeq/rolefact.git.
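The core idea of gating generation on a pre-calibrated confidence threshold can be illustrated with a minimal sketch. This is not the authors' implementation: the claim list, confidence scores, and `filter_claims` helper are all hypothetical, standing in for whatever claim-extraction and confidence-estimation machinery RoleFact actually uses.

```python
def filter_claims(claims, threshold=0.7):
    """Keep only claims whose (hypothetical) confidence score meets the
    pre-calibrated threshold; drop the rest to curb hallucination.

    `claims` is a list of (claim_text, confidence) pairs, where the
    confidence would come from some upstream scoring step (illustrative
    here, not the paper's method)."""
    return [text for text, conf in claims if conf >= threshold]


# Made-up atomic claims from a draft role-play response, with
# invented confidence scores for illustration only.
claims = [
    ("Sherlock Holmes lives at 221B Baker Street", 0.95),
    ("Sherlock Holmes owns a smartphone", 0.20),  # anachronistic, low score
]

print(filter_claims(claims))
```

In this toy example only the high-confidence, in-character claim survives the threshold; an out-of-scope (temporally anachronistic) claim is suppressed, mirroring the kind of temporal-hallucination reduction the abstract reports.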

Similar Work