
Retrieval-augmented Natural Language Reasoning For Explainable Visual Question Answering

Lim Su Hyeon, Kim Minkuk, Kim Hyeon Bae, Kim Seong Tae. arXiv 2024

[Paper]    
Applications Attention Mechanism GPT Interpretability And Explainability Model Architecture RAG

The Visual Question Answering with Natural Language Explanation (VQA-NLE) task is challenging due to its high demand for reasoning-based inference. Recent VQA-NLE studies focus on enhancing model networks to amplify the model’s reasoning capability, but this approach is resource-intensive and unstable. In this work, we introduce ReRe (Retrieval-augmented natural language Reasoning), a new VQA-NLE model that leverages retrieval information from memory to generate accurate answers and persuasive explanations without relying on complex networks or extra datasets. ReRe is an encoder-decoder model that uses a pre-trained CLIP vision encoder and a pre-trained GPT-2 language model as the decoder. Cross-attention layers are added to GPT-2 to process the retrieval features. ReRe outperforms previous methods in VQA accuracy and explanation score, and produces more persuasive and reliable natural language explanations.
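The abstract's key architectural idea is inserting cross-attention layers into a GPT-2-style decoder so that generated tokens can attend to retrieved memory features. A minimal PyTorch sketch of one such decoder block is given below; the class name, dimensions, and layer arrangement are illustrative assumptions, not the paper's actual implementation (a causal mask and the CLIP encoder are omitted for brevity):

```python
# Hedged sketch: a transformer decoder block with an added cross-attention
# layer over retrieval features, loosely following the ReRe description.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn

class RetrievalCrossAttnBlock(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        # Standard GPT-2-style self-attention over decoder tokens.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Added cross-attention: decoder tokens attend to retrieval features.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.ln3 = nn.LayerNorm(d_model)

    def forward(self, x, retrieval_feats):
        # x: (batch, seq_len, d_model) decoder token states
        # retrieval_feats: (batch, n_retrieved, d_model) memory features
        h = self.ln1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.ln2(x)
        x = x + self.cross_attn(h, retrieval_feats, retrieval_feats,
                                need_weights=False)[0]
        return x + self.ff(self.ln3(x))

tokens = torch.randn(2, 16, 768)    # hypothetical decoder token states
retrieved = torch.randn(2, 4, 768)  # hypothetical retrieval features
out = RetrievalCrossAttnBlock()(tokens, retrieved)
print(out.shape)  # torch.Size([2, 16, 768])
```

The output keeps the token sequence's shape, so such a block can be interleaved with (or wrapped around) the pre-trained GPT-2 layers without changing the rest of the decoder.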

Similar Work