Interpretable Visual Question Answering Via Reasoning Supervision

Maria Parelli, Dimitrios Mallis, Markos Diomataris, Vassilis Pitsikalis. arXiv 2023

[Paper]    
Applications Attention Mechanism Ethics And Bias Model Architecture Multimodal Models Pretraining Methods RAG Training Techniques Transformer

Transformer-based architectures have recently demonstrated remarkable performance on the Visual Question Answering (VQA) task. However, such models are prone to disregarding crucial visual cues, often relying on multimodal shortcuts and inherent biases of the language modality to predict the correct answer, a phenomenon commonly referred to as lack of visual grounding. In this work, we alleviate this shortcoming through a novel architecture for visual question answering that leverages common sense reasoning as a supervisory signal. Reasoning supervision takes the form of a textual justification of the correct answer; such annotations are already available in large-scale Visual Common Sense Reasoning (VCR) datasets. The model’s visual attention is guided toward important elements of the scene through a similarity loss that aligns the attention distribution induced by the question with the one induced by the correct reasoning. We demonstrate both quantitatively and qualitatively that the proposed approach boosts the model’s visual perception capability and leads to a performance increase, without requiring training on explicit grounding annotations.
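The core mechanism is the alignment of two visual attention distributions: one conditioned on the question and one conditioned on the ground-truth reasoning text. Below is a minimal sketch of such an alignment loss, assuming both distributions range over the same set of visual tokens and using a KL divergence as the similarity measure; the names (`attention_alignment_loss`, `attn_q`, `attn_r`, `lambda_align`) and the stop-gradient on the reasoning branch are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attention_alignment_loss(question_attn: torch.Tensor,
                             reasoning_attn: torch.Tensor) -> torch.Tensor:
    """Align question-guided visual attention with reasoning-guided attention.

    Both inputs are attention distributions over N visual tokens,
    shape (batch, N), with each row summing to 1. KL divergence is
    used here as the similarity loss (an illustrative choice).
    """
    eps = 1e-8
    # Treat the reasoning-guided attention as the target and stop
    # gradients through it, so only the question-guided attention is
    # pulled toward the reasoning's focus (an assumption of this sketch).
    target = reasoning_attn.detach()
    return F.kl_div((question_attn + eps).log(), target, reduction="batchmean")

# Hypothetical usage: `model` returns answer logits together with the
# visual attention distribution used to produce them.
# logits_q, attn_q = model(image, question)
# _, attn_r = model(image, reasoning_text)
# loss = F.cross_entropy(logits_q, answer) \
#        + lambda_align * attention_alignment_loss(attn_q, attn_r)
```

Since the reasoning text is needed only to produce the target attention during training, inference proceeds from the question alone, which is why no grounding annotations are required at test time.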

Similar Work