
Bartphobeit: Pre-trained Sequence-to-sequence And Image Transformers Models For Vietnamese Visual Question Answering

Tran Khiem Vinh, Van Nguyen Kiet, Nguyen Ngan Luu Thuy. arXiv 2023

[Paper]    
Tags: Applications, Model Architecture, Pretraining Methods, Transformer

Visual Question Answering (VQA) is an intricate and demanding task that integrates natural language processing (NLP) and computer vision (CV), and it has attracted considerable research interest. English, with its wealth of resources, has seen notable advances in both datasets and models for VQA. However, models targeting specific languages such as Vietnamese remain scarce. To address this limitation, we introduce BARTPhoBEiT, a transformer-based Vietnamese model. It combines a pre-trained Vietnamese Sequence-to-Sequence model with Bidirectional Encoder representation from Image Transformers (BEiT), and is evaluated on Vietnamese VQA datasets. Experimental results demonstrate that our proposed model outperforms the strong baseline and improves the state of the art on six metrics: Accuracy, Precision, Recall, F1-score, WUPS 0.0, and WUPS 0.9.
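
The abstract cites WUPS 0.0 and WUPS 0.9 among the evaluation metrics. The sketch below illustrates the WUPS (Wu-Palmer Similarity based) measure as commonly defined for VQA (Malinowski & Fritz, 2014), using NLTK's English WordNet. It is a minimal illustration, not the paper's code: the `wup` and `wups` helpers are hypothetical names, and the authors' evaluation on Vietnamese answers may rely on a different lexical resource or tokenization.

```python
# Minimal sketch of the WUPS@t metric (assumption: standard Malinowski & Fritz
# formulation); NOT the BARTPhoBEiT authors' implementation.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)


def wup(word_a: str, word_b: str, threshold: float) -> float:
    """Best Wu-Palmer similarity over all synset pairs of the two words,
    down-weighted by 0.1 when it falls below the threshold (e.g. 0.9)."""
    if word_a == word_b:
        return 1.0
    best = 0.0
    for sa in wn.synsets(word_a):
        for sb in wn.synsets(word_b):
            sim = sa.wup_similarity(sb)
            if sim is not None and sim > best:
                best = sim
    return best if best >= threshold else 0.1 * best


def wups(predictions: list[str], references: list[str], threshold: float) -> float:
    """Corpus-level WUPS@threshold: average soft token overlap between each
    predicted answer and its reference answer."""
    total = 0.0
    for pred, ref in zip(predictions, references):
        p_tokens, r_tokens = pred.lower().split(), ref.lower().split()
        # Product over prediction tokens of their best match in the reference,
        # the symmetric direction, then the minimum of the two.
        forward = 1.0
        for p in p_tokens:
            forward *= max(wup(p, r, threshold) for r in r_tokens)
        backward = 1.0
        for r in r_tokens:
            backward *= max(wup(r, p, threshold) for p in p_tokens)
        total += min(forward, backward)
    return total / len(predictions)


# Usage: WUPS 0.9 penalizes loose matches more heavily than WUPS 0.0.
preds, refs = ["dog", "red"], ["puppy", "red"]
print(wups(preds, refs, threshold=0.9))
print(wups(preds, refs, threshold=0.0))
```

WUPS 0.0 accepts any WordNet relatedness, while WUPS 0.9 only credits near-synonyms, which is why papers typically report both thresholds side by side.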

Similar Work