Cohesive Conversations: Enhancing Authenticity In Multi-agent Simulated Dialogues

Chu Kuanchao, Chen Yi-pei, Nakayama Hideki. arXiv 2024

[Paper]
Agentic Tools

This paper investigates the quality of multi-agent dialogues in simulations powered by Large Language Models (LLMs). Analyzing dialogues and memory over multiple sessions revealed significant issues such as repetition, inconsistency, and hallucination, exacerbated by the propagation of erroneous information. To combat these challenges, we propose a novel Screening, Diagnosis, and Regeneration (SDR) framework that detects and corrects utterance errors through a comprehensive process involving immediate issue identification, evidence gathering from past dialogues, and LLM analysis for utterance revision. By incorporating our SDR framework into Generative Agents (Park et al., 2023), we enhance the diversity, consistency, and factualness of the generated dialogues. This work presents a pioneering approach to enhancing dialogue quality in multi-agent simulations, establishing a new standard for future research in the field.
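The abstract's three-stage pipeline can be sketched as a minimal loop. This is a hypothetical illustration, not the paper's implementation: the class name, method names, and the rule-based stand-ins (exact-repetition screening, prefix-tagged regeneration) are all assumptions; in the actual framework each stage would invoke an LLM.

```python
from dataclasses import dataclass, field

@dataclass
class SDRPipeline:
    """Illustrative sketch of a Screening, Diagnosis, Regeneration loop.

    The real framework uses LLM calls at each stage; here the stages are
    simple rule-based stand-ins so the control flow is visible.
    """
    history: list = field(default_factory=list)

    def screen(self, utterance: str) -> list:
        # Screening: flag immediate issues in the new utterance,
        # e.g. verbatim repetition of an earlier turn.
        issues = []
        if utterance in self.history:
            issues.append("repetition")
        return issues

    def diagnose(self, issues: list) -> list:
        # Diagnosis: gather evidence from past dialogue turns
        # that supports or refutes the flagged issues.
        if not issues:
            return []
        return list(self.history)  # stand-in: treat all prior turns as evidence

    def regenerate(self, utterance: str, evidence: list) -> str:
        # Regeneration: revise the utterance in light of the evidence
        # (an LLM rewrite in the actual framework; tagged here for clarity).
        return f"[revised] {utterance}" if evidence else utterance

    def process(self, utterance: str) -> str:
        # Full pass: screen -> diagnose -> regenerate, then commit to memory.
        issues = self.screen(utterance)
        evidence = self.diagnose(issues)
        revised = self.regenerate(utterance, evidence)
        self.history.append(revised)
        return revised
```

Under this sketch, a first-time utterance passes through unchanged, while an exact repeat is flagged at screening and rewritten at regeneration, which mirrors how the framework intercepts errors before they propagate into shared memory.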

Similar Work