Hallucination Detection And Hallucination Mitigation: An Investigation

Luo Junliang, Li Tianyu, Wu Di, Jenkin Michael, Liu Steve, Dudek Gregory. Arxiv 2024

[Paper]    
Applications, GPT, Model Architecture, Reinforcement Learning, Survey Paper

Large language models (LLMs), including ChatGPT, Bard, and Llama, have achieved remarkable successes over the last two years in a range of different applications. Despite these successes, concerns remain that limit their wide application. A key issue is hallucination: in addition to correct responses, LLMs can also generate seemingly correct but factually incorrect responses. This report aims to present a comprehensive review of the current literature on both hallucination detection and hallucination mitigation. We hope that this report can serve as a good reference for both engineers and researchers who are interested in LLMs and in applying them to real-world tasks.

Similar Work