Cognitive Mirage: A Review Of Hallucinations In Large Language Models

Hongbin Ye, Tong Liu, Aijia Zhang, Wei Hua, Weiqiang Jia. arXiv 2023 – 37 citations

[Paper]
RAG Attention Mechanism Model Architecture Language Modeling

As large language models continue to develop in the field of AI, text generation systems are susceptible to a worrisome phenomenon known as hallucination. In this study, we summarize recent compelling insights into hallucinations in LLMs. We present a novel taxonomy of hallucinations across various text generation tasks and provide theoretical insights, detection methods, and improvement approaches. Based on this, future research directions are proposed. Our contributions are threefold: (1) we provide a detailed and complete taxonomy of hallucinations appearing in text generation tasks; (2) we provide theoretical analyses of hallucinations in LLMs and summarize existing detection and improvement methods; (3) we propose several research directions for future work. As hallucinations garner significant attention from the community, we will continue to update this page with relevant research progress.
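For orientation, the sketch below illustrates one common family of detection methods that reviews like this survey: a sampling-based self-consistency check, where low agreement across repeated generations is treated as a weak hallucination signal. This is a minimal, hypothetical example and not the paper's own method; the `ask` callable and `consistency_score` helper are assumed names standing in for any real LLM client.

```python
# Illustrative sketch (not from the paper): a sampling-based consistency check.
# Low agreement across repeated samples is used as a weak hallucination signal.
from collections import Counter
from typing import Callable, List


def consistency_score(prompt: str, ask: Callable[[str], str], n_samples: int = 5) -> float:
    """Sample the model several times and return the fraction of answers that
    agree with the most common answer. `ask` is any function mapping a prompt
    to a model response (hypothetical stand-in for a real LLM API call)."""
    answers: List[str] = [ask(prompt).strip().lower() for _ in range(n_samples)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / n_samples


if __name__ == "__main__":
    # Toy deterministic "model" so the script runs as-is; replace with a real client.
    score = consistency_score(
        "What year was the Eiffel Tower completed?",
        ask=lambda prompt: "1889",
        n_samples=5,
    )
    verdict = "likely grounded" if score > 0.6 else "possible hallucination"
    print(f"agreement = {score:.2f} -> {verdict}")
```

In practice, such consistency checks are only one of several detection strategies the review covers; they trade extra inference cost for a model-agnostic signal that requires no reference text.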

Similar Work