
The Pitfalls Of Defining Hallucination

Kees van Deemter. arXiv 2024

[Paper]    

Despite impressive advances in Natural Language Generation (NLG) and Large Language Models (LLMs), researchers are still unclear about important aspects of NLG evaluation. To substantiate this claim, I examine current classifications of hallucination and omission in data-to-text NLG, and I propose a logic-based synthesis of these classifications. I conclude by highlighting some remaining limitations of all current thinking about hallucination and by discussing implications for LLMs.
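One way to picture the logic-based view the abstract alludes to (this is an illustrative assumption, not the paper's actual formalism) is set-theoretic: if the input data and the generated text are both abstracted as sets of atomic facts, hallucinated content is what the text asserts beyond the data, and omitted content is what the data contains but the text leaves out.

```python
# Illustrative sketch only, NOT the paper's formalism: model the input data
# and the generated text as sets of atomic (attribute, value) facts, then
# define hallucination and omission as set differences.

def hallucinations(data_facts: set, text_facts: set) -> set:
    """Facts asserted by the text but not supported by the input data."""
    return text_facts - data_facts

def omissions(data_facts: set, text_facts: set) -> set:
    """Facts present in the input data but not conveyed by the text."""
    return data_facts - text_facts

# Hypothetical data-to-text example: a weather report.
data = {("city", "Oslo"), ("temp", "-3C"), ("wind", "strong")}
text = {("city", "Oslo"), ("temp", "-3C"), ("sky", "sunny")}

print(hallucinations(data, text))  # the unsupported ("sky", "sunny") fact
print(omissions(data, text))       # the unreported ("wind", "strong") fact
```

Real texts do not decompose cleanly into atomic facts, which is one reason, as the abstract notes, that classifying hallucination and omission remains contested.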

Similar Work