
Precision Empowers, Excess Distracts: Visual Question Answering With Dynamically Infused Knowledge In Language Models

Manas Jhalani, K. M. Annervaz, Pushpak Bhattacharyya. arXiv 2024

Tags: Applications, Fine-Tuning, Model Architecture, Multimodal Models, Pretraining Methods, RAG, Training Techniques, Transformer

In the realm of multimodal tasks, Visual Question Answering (VQA) plays a crucial role by answering natural language questions grounded in visual content. Knowledge-Based Visual Question Answering (KBVQA) extends this setting by supplying external knowledge alongside the image. We introduce an approach for KBVQA that augments the existing vision-language transformer encoder-decoder model OFA. Our main contribution is enhancing questions with relevant external knowledge extracted from knowledge graphs via a dynamic triple extraction method: rather than a fixed number, we supply as many triples from the knowledge graph as each question requires as context. Our knowledge-enriched model improves the Exact Match Score over the state of the art by 4.75% on average across three KBVQA datasets. Through experiments and analysis, we show that furnishing a variable number of triples per question improves the reasoning capabilities of the language model compared with supplying a fixed number, and that this holds even for recent large language models. We further highlight the model's generalization capability by showing SOTA-beating performance on a small dataset after straightforward fine-tuning.
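To make the dynamic-triple idea concrete, here is a minimal Python sketch of question-conditioned triple selection. It is illustrative only: the paper's actual extraction method operates over real knowledge graphs with a learned retriever, whereas the toy lexical `relevance` scorer, the `threshold`, and the `cap` below are stand-in assumptions. The point it demonstrates is that the number of injected triples varies per question instead of being a fixed k.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    head: str
    relation: str
    tail: str

    def verbalize(self) -> str:
        # Render the triple as text so it can be prepended to the question.
        return f"{self.head} {self.relation} {self.tail}."


def relevance(question: str, triple: Triple) -> float:
    # Toy lexical overlap (Jaccard) standing in for a learned retriever.
    q = set(question.lower().split())
    t = set(triple.verbalize().lower().split())
    return len(q & t) / len(q | t) if q | t else 0.0


def select_triples(question: str, candidates: list[Triple],
                   threshold: float = 0.1, cap: int = 10) -> list[Triple]:
    # Keep every triple scoring above the threshold, up to a cap, so the
    # amount of injected knowledge adapts to the question rather than
    # being a fixed k (the "precision empowers, excess distracts" idea).
    scored = sorted(candidates, key=lambda tr: relevance(question, tr),
                    reverse=True)
    return [tr for tr in scored if relevance(question, tr) >= threshold][:cap]


def build_prompt(question: str, candidates: list[Triple]) -> str:
    # Prepend the selected triples as textual context for the VQA model.
    context = " ".join(tr.verbalize() for tr in select_triples(question, candidates))
    return f"Context: {context}\nQuestion: {question}"


if __name__ == "__main__":
    kg = [
        Triple("Eiffel Tower", "located in", "Paris"),
        Triple("Paris", "capital of", "France"),
        Triple("Mount Fuji", "located in", "Japan"),
    ]
    print(build_prompt("Which country is the Eiffel Tower in?", kg))
```

Under this sketch, an unrelated triple such as the Mount Fuji one falls below the threshold and is dropped, while a question mentioning many entities would pull in more triples; the paper's reported gains come from exactly this variable-budget behavior, implemented with its own retrieval machinery.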
