
VisualBERT: A Simple and Performant Baseline for Vision and Language

Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. arXiv 2019

[Paper]    
Attention Mechanism · BERT · Model Architecture · Pretraining Methods · Tools · Training Techniques · Transformer

We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and the image regions corresponding to their arguments.
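
The abstract describes the core idea: text token embeddings and image-region features are fed into a single Transformer encoder, and joint self-attention over the concatenated sequence provides the implicit alignment. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation; all dimensions, module names, and the single masked-language-model head are assumptions chosen for brevity (the paper's actual pre-training uses two visually-grounded objectives and detector-based region features).

```python
# Illustrative sketch (not the authors' code): a VisualBERT-style encoder that
# concatenates text token embeddings with projected image-region features and
# lets a standard Transformer encoder attend over both jointly.
import torch
import torch.nn as nn


class VisualBERTSketch(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, n_layers=4, n_heads=12,
                 region_feat_dim=2048, max_positions=512):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden)
        self.pos_emb = nn.Embedding(max_positions, hidden)
        # Segment embeddings distinguish text (0) from image regions (1).
        self.segment_emb = nn.Embedding(2, hidden)
        # Project detector region features into the word-embedding space.
        self.region_proj = nn.Linear(region_feat_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Head for a masked-language-model style objective over text tokens.
        self.mlm_head = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids, region_feats):
        B, T = token_ids.shape
        _, R, _ = region_feats.shape
        positions = torch.arange(T, device=token_ids.device).unsqueeze(0)
        text = (self.token_emb(token_ids) + self.pos_emb(positions)
                + self.segment_emb(torch.zeros_like(token_ids)))
        regions = (self.region_proj(region_feats)
                   + self.segment_emb(torch.ones(B, R, dtype=torch.long,
                                                 device=token_ids.device)))
        # Self-attention over the concatenated sequence lets words attend to
        # image regions (and vice versa) without explicit alignment labels.
        hidden = self.encoder(torch.cat([text, regions], dim=1))
        return self.mlm_head(hidden[:, :T])  # logits for masked text tokens


# Tiny smoke test with random inputs (2 captions of 16 tokens, 36 regions each).
model = VisualBERTSketch()
logits = model(torch.randint(0, 30522, (2, 16)), torch.randn(2, 36, 2048))
print(logits.shape)  # torch.Size([2, 16, 30522])
```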

Similar Work