On Early Detection Of Hallucinations In Factual Question Answering

Snyder Ben, Moisescu Marius, Zafar Muhammad Bilal. arXiv 2023

[Paper]    
Applications, Attention Mechanism, Model Architecture, Reinforcement Learning, Transformer

While large language models (LLMs) have taken great strides towards helping humans with a plethora of tasks, hallucinations remain a major impediment to gaining user trust. The fluency and coherence of model generations, even when hallucinating, make detection difficult. In this work, we explore whether the artifacts associated with model generations can provide hints that a generation will contain hallucinations. Specifically, we probe LLMs at 1) the inputs via Integrated Gradients based token attribution, 2) the outputs via the softmax probabilities, and 3) the internal state via self-attention and fully-connected layer activations for signs of hallucinations on open-ended question answering tasks. Our results show that the distributions of these artifacts tend to differ between hallucinated and non-hallucinated generations. Building on this insight, we train binary classifiers that use these artifacts as input features to classify model generations into hallucinations and non-hallucinations. These hallucination classifiers achieve an AUROC of up to \(0.80\). We also show that the artifacts of tokens preceding a hallucination can predict the subsequent hallucination before it occurs.
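As a rough illustration of the probing setup described in the abstract, the sketch below (not the authors' implementation) extracts two of the named artifacts, the output softmax probabilities and the last-layer activations, from a Hugging Face causal LM and fits a binary logistic-regression hallucination classifier on them. The model name (`gpt2`), the mean-pooling of features, and the toy labeled examples are illustrative assumptions; Integrated Gradients attributions and self-attention features would be appended to the feature vector analogously.

```python
# Minimal sketch, assuming a Hugging Face causal LM and scikit-learn.
# Artifacts used here: softmax probabilities of generated tokens and
# last-layer hidden-state activations over the answer span.
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed model choice
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def artifact_features(prompt: str, answer: str) -> np.ndarray:
    """Mean softmax probability of the answer tokens plus the mean
    last-layer activation over the answer span (illustrative pooling)."""
    enc = tokenizer(prompt + answer, return_tensors="pt")
    n_prompt = len(tokenizer(prompt)["input_ids"])
    with torch.no_grad():
        out = model(**enc)
    ids = enc["input_ids"][0]
    probs = torch.softmax(out.logits[0], dim=-1)
    # probability the model assigned to each realized answer token
    tok_probs = probs[torch.arange(n_prompt - 1, len(ids) - 1), ids[n_prompt:]]
    hidden = out.hidden_states[-1][0, n_prompt:]  # last-layer activations
    return np.concatenate([[tok_probs.mean().item()],
                           hidden.mean(dim=0).numpy()])

# Hypothetical labeled generations: 1 = hallucinated, 0 = factual.
examples = [("Q: Who wrote Hamlet? A:", " William Shakespeare", 0),
            ("Q: Who wrote Hamlet? A:", " Charles Dickens", 1)]
X = np.stack([artifact_features(p, a) for p, a, _ in examples])
y = np.array([label for _, _, label in examples])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train AUROC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```

In practice the classifier would be trained on many labeled generations and evaluated on held-out questions; the single-feature-vector-per-generation pooling above is only one possible design choice.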

Similar Work