Redefining "Hallucination" in LLMs: Towards a Psychology-Informed Framework for Mitigating Misinformation

Elijah Berberette, Jack Hutchins, Amir Sadovnik. arXiv 2024

[Paper]    
Tags: Ethics And Bias, GPT, Model Architecture, RAG, Tools

In recent years, large language models (LLMs) have become incredibly popular, with ChatGPT, for example, being used by over a billion users. While these models exhibit remarkable language understanding and logical prowess, a notable challenge surfaces in the form of "hallucinations": LLMs confidently output misinformation, which can lead to devastating consequences with such a large user base. However, we question the appropriateness of the term "hallucination" for LLMs, proposing instead a psychological taxonomy based on cognitive biases and other psychological phenomena. This approach offers a more fine-grained understanding of the phenomenon, allowing for targeted solutions. By leveraging insights from how humans internally resolve similar challenges, we aim to develop strategies to mitigate LLM hallucinations. This interdisciplinary approach seeks to move beyond conventional terminology, providing a nuanced understanding and actionable pathways for improving LLM reliability.

Similar Work