Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework

Sun Xiaoxi, Li Jinpeng, Zhong Yan, Zhao Dongyan, Yan Rui. arXiv 2024

[Paper]    
Agentic · Applications · Language Modeling · Merging · Tools · Training Techniques

The advent of large language models (LLMs) has greatly advanced natural language text generation, but it also poses unprecedented challenges, with content hallucination emerging as a significant concern. Existing solutions often require expensive and complex interventions during training. Moreover, some approaches emphasize decomposing the problem while neglecting the crucial validation step, leading to degraded performance or limited applicability. To overcome these limitations, we propose a Markov Chain-based multi-agent debate verification framework that improves hallucination detection accuracy on concise claims. Our method integrates the full fact-checking process: claim detection, evidence retrieval, and multi-agent verification. In the verification stage, multiple agents validate individual claims through flexible Markov Chain-based debates, ensuring thorough verification outcomes. Experimental results on three generative tasks show significant improvements over baselines.
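
The abstract outlines the pipeline (claim detection, evidence retrieval, multi-agent debate verification) without implementation detail. Below is a minimal sketch of how a Markov Chain-based debate over a single claim might look; `Claim`, `agent_verdict`, `debate_verify`, the verdict labels, and the transition behavior are all illustrative assumptions, not the paper's actual design. What it illustrates is the Markov property: each agent's verdict depends only on the current debate state, not the full history, with consensus acting as an absorbing state.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A concise claim paired with retrieved evidence (hypothetical schema)."""
    text: str
    evidence: list[str] = field(default_factory=list)

def agent_verdict(agent_id: int, claim: Claim, state: str | None) -> str:
    """Placeholder for an LLM judge. In a real implementation this would
    prompt an agent with the claim, the retrieved evidence, and the current
    debate state (the previous agent's verdict and argument). The simulated
    judge keeps the incoming verdict 70% of the time, so each transition
    depends only on the current state -- the Markov property."""
    if state is not None and random.random() < 0.7:
        return state
    return random.choice(["SUPPORTED", "REFUTED"])

def debate_verify(claim: Claim, n_agents: int = 3, max_rounds: int = 5) -> str:
    """Run a chain of agent verdicts over one claim. The debate terminates
    early once a full round of agents agrees (an absorbing state); otherwise
    it falls back to the last state after max_rounds."""
    state: str | None = None
    for _ in range(max_rounds):
        round_verdicts = []
        for agent_id in range(n_agents):
            state = agent_verdict(agent_id, claim, state)  # one chain step
            round_verdicts.append(state)
        if len(set(round_verdicts)) == 1:  # unanimous round -> stop
            return round_verdicts[0]
    return state

if __name__ == "__main__":
    claim = Claim(
        text="The Eiffel Tower is located in Berlin.",
        evidence=["The Eiffel Tower is a landmark in Paris, France."],
    )
    print(debate_verify(claim))
```

Terminating on a unanimous round keeps the chain short on easy claims while allowing more back-and-forth on contested ones, which matches the abstract's emphasis on flexible, per-claim debates.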
