AutoHall: Automated Hallucination Dataset Generation For Large Language Models

Cao Zouying, Yang Yifei, Zhao Hai. arXiv 2023

[Paper]    
Applications

While large language models (LLMs) have been widely adopted across domains thanks to their powerful language understanding and generation capabilities, research on detecting the non-factual, hallucinatory content they generate remains scarce. A significant obstacle in hallucination detection today is the time-consuming and expensive manual annotation of hallucinated generations. To address this issue, this paper first introduces AutoHall, a method for automatically constructing model-specific hallucination datasets from existing fact-checking datasets. We further propose a zero-resource, black-box hallucination detection method based on self-contradiction. We conduct experiments on prevalent open- and closed-source LLMs, achieving superior hallucination detection performance compared to existing baselines. Moreover, our experiments reveal variations in hallucination proportions and types among different models.
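The abstract does not spell out the detection procedure, but the self-contradiction idea can be sketched roughly as follows: sample several responses from the black-box model for the same prompt and measure how often they contradict one another, with high mutual contradiction suggesting hallucination. This is a minimal illustrative sketch, not the authors' implementation; the `generate` and `contradicts` callables are hypothetical stand-ins (a real setup would plug in the target LLM and an NLI-style contradiction model).

```python
import random
from typing import Callable


def self_contradiction_score(
    prompt: str,
    generate: Callable[[str], str],          # black-box LLM sampler (assumption)
    contradicts: Callable[[str, str], bool], # NLI-style checker (assumption)
    k: int = 5,
) -> float:
    """Sample k responses and score hallucination via mutual contradiction.

    Intuition: when the model hallucinates, independently sampled answers
    to the same prompt tend to contradict one another; factual answers
    stay consistent across samples.
    """
    samples = [generate(prompt) for _ in range(k)]
    pairs = [(a, b) for i, a in enumerate(samples) for b in samples[i + 1:]]
    if not pairs:
        return 0.0
    # Fraction of sampled response pairs judged contradictory.
    return sum(contradicts(a, b) for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    # Toy stand-ins: a "model" that answers stochastically, and a naive
    # checker that treats any string mismatch as a contradiction.
    fake_llm = lambda p: random.choice(["Paris", "Lyon", "Paris"])
    naive_nli = lambda a, b: a != b
    score = self_contradiction_score("Capital of France?", fake_llm, naive_nli)
    print(f"self-contradiction score: {score:.2f}")  # higher => more likely hallucination
```

In practice the pairwise check would be an entailment/contradiction classifier rather than string comparison, and the score could be thresholded per model, which would also surface the cross-model differences in hallucination rates the abstract mentions.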

Similar Work