
Cognitive Mirage: A Review Of Hallucinations In Large Language Models

Ye Hongbin, Liu Tong, Zhang Aijia, Hua Wei, Jia Weiqiang. arXiv 2023

[Paper]    
Applications Attention Mechanism Language Modeling Model Architecture RAG

As large language models (LLMs) continue to advance in the field of AI, text generation systems are susceptible to a worrisome phenomenon known as hallucination. In this study, we summarize recent compelling insights into hallucinations in LLMs. We present a novel taxonomy of hallucinations across various text generation tasks, thereby providing theoretical insights, detection methods, and improvement approaches. Based on this, future research directions are proposed. Our contributions are threefold: (1) we provide a detailed and complete taxonomy of hallucinations appearing in text generation tasks; (2) we provide theoretical analyses of hallucinations in LLMs and survey existing detection and improvement methods; (3) we propose several research directions that can be pursued in the future. As hallucinations continue to garner significant attention from the community, we will maintain updates on relevant research progress.
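The abstract mentions detection methods without detailing them; below is a minimal sketch, not taken from the paper, of one common family of approaches it surveys: sampling-based consistency checks, where an answer that disagrees with independently resampled answers for the same prompt is flagged as more likely hallucinated. The function names and the 0.5 threshold are illustrative assumptions, and lexical similarity stands in for the stronger semantic comparisons used in practice.

```python
# Minimal sketch of a sampling-based consistency check (illustrative only).
# Idea: if resampled answers to the same prompt disagree with the primary
# answer, treat the primary answer as a likely hallucination.
from difflib import SequenceMatcher
from typing import List


def consistency_score(answer: str, samples: List[str]) -> float:
    """Average lexical similarity between the answer and resampled answers."""
    if not samples:
        return 1.0  # nothing to compare against; assume consistent
    sims = [SequenceMatcher(None, answer.lower(), s.lower()).ratio() for s in samples]
    return sum(sims) / len(sims)


def flag_hallucination(answer: str, samples: List[str], threshold: float = 0.5) -> bool:
    """Flag the answer when it agrees poorly with independently sampled answers."""
    return consistency_score(answer, samples) < threshold


if __name__ == "__main__":
    ans = "Marie Curie won two Nobel Prizes, in Physics and Chemistry."
    resamples = [
        "Marie Curie won two Nobel Prizes: Physics (1903) and Chemistry (1911).",
        "She received Nobel Prizes in both Physics and Chemistry.",
    ]
    print(flag_hallucination(ans, resamples))  # False: the samples agree with the answer
```

In practice, surveyed detectors replace the lexical similarity here with entailment models, QA-based verification, or retrieval against external evidence, but the underlying consistency intuition is the same.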

Similar Work