Visual Hallucination: Definition, Quantification, And Prescriptive Remediations

Anku Rani, Vipula Rawte, Harshad Sharma, Neeraj Anand, Krishnav Rajbangshi, Amit Sheth, Amitava Das. arXiv 2024

[Paper]    
Applications, Ethics And Bias, Multimodal Models, Reinforcement Learning, Responsible AI

The troubling rise of hallucination presents perhaps the most significant impediment to the advancement of responsible AI. In recent times, considerable research has focused on detecting and mitigating hallucination in Large Language Models (LLMs). However, hallucination is also quite prevalent in Vision-Language Models (VLMs). In this paper, we offer a fine-grained discourse on profiling VLM hallucination based on two tasks: i) image captioning, and ii) Visual Question Answering (VQA). We delineate eight fine-grained orientations of visual hallucination: i) Contextual Guessing, ii) Identity Incongruity, iii) Geographical Erratum, iv) Visual Illusion, v) Gender Anomaly, vi) VLM as Classifier, vii) Wrong Reading, and viii) Numeric Discrepancy. We curate Visual HallucInation eLiciTation (VHILT), a publicly available dataset of 2,000 samples generated by eight VLMs across the two tasks of captioning and VQA, along with human annotations for the aforementioned categories.
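The eight-category taxonomy lends itself to a simple per-category audit of annotated model outputs. The sketch below is a minimal, hypothetical example of tallying VHILT-style annotations by hallucination category; the JSONL layout, the `category` field, and the file name `vhilt.jsonl` are assumptions made for illustration, not the dataset's published schema.

```python
import json
from collections import Counter

# The eight fine-grained hallucination categories defined in the paper.
CATEGORIES = [
    "Contextual Guessing", "Identity Incongruity", "Geographical Erratum",
    "Visual Illusion", "Gender Anomaly", "VLM as Classifier",
    "Wrong Reading", "Numeric Discrepancy",
]

def tally_by_category(path: str) -> Counter:
    """Count annotated samples per hallucination category.

    Assumes a JSONL file where each line is one sample carrying a
    human-annotated "category" field -- a hypothetical layout, not the
    official VHILT schema.
    """
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)
            category = sample.get("category")
            if category in CATEGORIES:
                counts[category] += 1
    return counts

if __name__ == "__main__":
    # Print categories from most to least frequent in the annotation file.
    for category, n in tally_by_category("vhilt.jsonl").most_common():
        print(f"{category}: {n}")
```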
