Multimodal Unified Attention Networks For Vision-and-language Interactions

Zhou Yu, Yuhao Cui, Jun Yu, Dacheng Tao, Qi Tian. arXiv 2019

[Paper]    
Applications Attention Mechanism Model Architecture Multimodal Models Transformer

Learning an effective attention mechanism for multimodal data is important in many vision-and-language tasks that require a synergistic understanding of both the visual and textual contents. Existing state-of-the-art approaches use co-attention models to associate each visual object (e.g., image region) with each textual object (e.g., query word). Despite the success of these co-attention models, they only model inter-modal interactions while neglecting intra-modal interactions. Here we propose a general "unified attention" model that simultaneously captures the intra- and inter-modal interactions of multimodal features and outputs their corresponding attended representations. By stacking such unified attention blocks in depth, we obtain the deep Multimodal Unified Attention Network (MUAN), which can seamlessly be applied to the visual question answering (VQA) and visual grounding tasks. We evaluate our MUAN models on two VQA datasets and three visual grounding datasets, and the results show that MUAN achieves top-level performance on both tasks without bells and whistles.
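The core idea of a unified attention block can be illustrated with a minimal NumPy sketch: applying self-attention over the concatenation of the visual and textual feature sequences yields a single score matrix whose blocks cover both intra-modal (vision-vision, text-text) and inter-modal (vision-text) interactions. The function and parameter names below are hypothetical, and this omits details of the actual MUAN block (multi-head attention, feed-forward layers, and the gating described in the paper).

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def unified_attention(visual, textual, Wq, Wk, Wv):
    """Sketch of one unified attention block (hypothetical names).

    visual:  (m, d) image-region features
    textual: (n, d) word features
    Self-attention runs over the concatenated (m+n, d) sequence, so one
    score matrix captures intra- and inter-modal interactions jointly.
    """
    x = np.concatenate([visual, textual], axis=0)        # (m+n, d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv                     # projections
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))     # (m+n, m+n)
    attended = scores @ v                                # attended features
    m = visual.shape[0]
    return attended[:m], attended[m:]                    # split back per modality
```

Stacking several such blocks, each consuming the previous block's attended outputs, gives the deep network structure described above.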

Similar Work