
ViCLEVR: A Visual Reasoning Dataset And Hybrid Multimodal Fusion Model For Visual Question Answering In Vietnamese

Khiem Vinh Tran, Hao Phu Phan, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen. Arxiv 2023

[Paper] [Code]    
Applications Attention Mechanism Ethics And Bias Has Code Merging Model Architecture Multimodal Models Pretraining Methods RAG Reinforcement Learning Transformer

In recent years, Visual Question Answering (VQA) has gained significant attention for its diverse applications, including intelligent car assistance, aiding visually impaired individuals, and document image information retrieval using natural language queries. VQA requires effective integration of information from questions and images to generate accurate answers. Neural models for VQA have made remarkable progress on large-scale datasets, but with a primary focus on resource-rich languages such as English. To address this gap, we introduce the ViCLEVR dataset, a pioneering collection for evaluating various visual reasoning capabilities in Vietnamese while mitigating biases. The dataset comprises over 26,000 images and 30,000 question-answer pairs (QAs), with each question annotated to specify the type of reasoning involved. Leveraging this dataset, we conduct a comprehensive analysis of contemporary visual reasoning systems, offering valuable insights into their strengths and limitations. Furthermore, we present PhoVIT, a comprehensive multimodal fusion model that identifies objects in images based on the corresponding questions. The architecture employs transformers to enable simultaneous reasoning over textual and visual data, merging both modalities at an early stage of the model. The experimental findings demonstrate that our proposed model achieves state-of-the-art performance across four evaluation metrics. The accompanying code and dataset are publicly available at https://github.com/kvt0012/ViCLEVR. This release is intended to stimulate advances within the research community and to foster the development of multimodal fusion algorithms tailored to the nuances of low-resource languages such as Vietnamese.
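The abstract's key architectural claim is early fusion: text and image representations are merged into a single sequence before the transformer, so self-attention reasons over both modalities jointly from the first layer. The paper itself defines PhoVIT's exact design; the sketch below is only a generic PyTorch illustration of that early-fusion idea, and the `EarlyFusionVQA` class, its dimensions, and its feature inputs are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EarlyFusionVQA(nn.Module):
    """Minimal early-fusion sketch (hypothetical, not PhoVIT itself):
    question tokens and image features are projected into one shared
    embedding space and concatenated BEFORE a single transformer
    encoder, so attention mixes the modalities from layer one."""

    def __init__(self, vocab_size=64000, d_model=512, n_heads=8,
                 n_layers=6, img_feat_dim=2048, n_answers=1000):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Project pre-extracted image region/grid features into d_model
        self.img_proj = nn.Linear(img_feat_dim, d_model)
        # Learned type embeddings mark each token's modality (0=text, 1=image)
        self.type_embed = nn.Embedding(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.classifier = nn.Linear(d_model, n_answers)

    def forward(self, question_ids, img_feats):
        # question_ids: (B, Lt) token ids; img_feats: (B, Li, img_feat_dim)
        t = self.text_embed(question_ids) + self.type_embed.weight[0]
        v = self.img_proj(img_feats) + self.type_embed.weight[1]
        fused = torch.cat([t, v], dim=1)       # early fusion: one sequence
        h = self.encoder(fused)                # joint self-attention
        return self.classifier(h.mean(dim=1))  # answer distribution

model = EarlyFusionVQA()
q = torch.randint(0, 64000, (2, 12))   # toy question token ids
f = torch.randn(2, 36, 2048)           # toy image region features
logits = model(q, f)                   # shape (2, 1000)
```

The design contrast this illustrates: a late-fusion model would encode question and image with separate towers and combine only their pooled vectors, whereas merging the token sequences up front lets every attention layer relate individual words to individual image regions.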

Similar Work