
A Cause-Effect Look at Alleviating Hallucination of Knowledge-Grounded Dialogue Generation

Yu Jifan, Zhang Xiaohan, Xu Yifan, Lei Xuanyu, Yao Zijun, Zhang Jing, Hou Lei, Li Juanzi. arXiv 2024

[Paper]    
Applications Attention Mechanism Model Architecture Reinforcement Learning

Empowered by large-scale pretrained language models, existing dialogue systems have demonstrated impressive performance in conducting fluent and natural-sounding conversations. However, they are still plagued by the hallucination problem, which causes unpredictable factual errors in the generated responses. Recently, knowledge-grounded dialogue (KGD) generation models, which intentionally invoke external knowledge resources to produce more informative responses, have also proven effective in reducing hallucination. Following the idea of obtaining high-quality knowledge, several efforts have achieved strong performance on this issue. Since some knowledge noise is inevitable and may also lead to hallucinations, it is urgent to investigate the underlying causes and future directions for building noise-tolerant methods in KGD tasks. In this paper, we analyze the causal story behind this problem with counterfactual reasoning methods. Based on the causal effect analysis, we propose a possible solution for alleviating hallucination in KGD by exploiting the dialogue-knowledge interaction. Experimental results of our example implementation show that this method can reduce hallucination without disrupting other aspects of dialogue performance, while remaining adaptable to different generation models. We hope our efforts can support and call for more attention to developing lightweight techniques towards robust and trustworthy dialogue systems.
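The abstract does not include any implementation details, but the general counterfactual-inference idea it alludes to can be sketched at decoding time: contrast the factual prediction (dialogue plus retrieved knowledge) with a counterfactual prediction in which the dialogue context is masked, and subtract a scaled version of the knowledge-only effect so that the part mediated by the dialogue-knowledge interaction dominates. The sketch below is a minimal illustration under assumed interfaces; `model`, `knowledge_ids`, `alpha`, and `mask_token_id` are hypothetical names, not the authors' actual API or method.

```python
import torch


def counterfactual_debias_logits(model, dialogue_ids, knowledge_ids,
                                 alpha=0.5, mask_token_id=0):
    """Hypothetical sketch of counterfactual-style debiasing for KGD.

    Assumes a model whose forward pass accepts both the dialogue history and
    the retrieved knowledge (`model(input_ids=..., knowledge_ids=...)`); this
    interface is an assumption for illustration only.
    """
    # Factual pass: dialogue history + retrieved knowledge (total effect).
    factual = model(input_ids=dialogue_ids,
                    knowledge_ids=knowledge_ids).logits

    # Counterfactual pass: mask the dialogue so the logits reflect the direct
    # effect of the (possibly noisy) knowledge alone.
    masked_dialogue = torch.full_like(dialogue_ids, mask_token_id)
    counterfactual = model(input_ids=masked_dialogue,
                           knowledge_ids=knowledge_ids).logits

    # Debiased logits: total effect minus a scaled knowledge-only effect,
    # emphasizing the contribution of the dialogue-knowledge interaction.
    return factual - alpha * counterfactual
```

In this reading, `alpha` trades off how strongly the knowledge-only branch is discounted; setting it to zero recovers ordinary decoding. This is only one plausible instantiation of the causal-effect analysis described above, not the paper's reported implementation.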

Similar Work