"confidently Nonsensical?'': A Critical Survey On The Perspectives And Challenges Of 'hallucinations' In NLP · The Large Language Model Bible Contribute to LLM-Bible

"confidently Nonsensical?'': A Critical Survey On The Perspectives And Challenges Of 'hallucinations' In NLP

Pranav Narayanan Venkit, Tatiana Chakravorti, Vipul Gupta, Heidi Biggs, Mukund Srinath, Koustava Goswami, Sarah Rajtmajer, Shomir Wilson. arXiv 2024

[Paper]    
Survey Paper Tools

We investigate how hallucination in large language models (LLMs) is characterized in peer-reviewed literature through a critical examination of 103 publications across NLP research. Through a comprehensive review of sociological and technological literature, we identify a lack of agreement on the term 'hallucination.' Additionally, we survey 171 practitioners from the fields of NLP and AI to capture varying perspectives on hallucination. Our analysis underscores the necessity for explicit definitions and frameworks outlining hallucination within NLP, highlights potential challenges, and our survey responses provide a thematic understanding of the influence and ramifications of hallucination in society.
