An Empirical Study of Training End-to-End Vision-and-Language Transformers

Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, Zicheng Liu, Michael Zeng. arXiv 2021

Tags: Attention Mechanism, BERT, Has Code, Merging, Model Architecture, Multimodal Models, Pretraining Methods, Tools, Training Techniques, Transformer

Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks. While recent work has shown that fully transformer-based VL models can be more efficient than previous region-feature-based methods, their performance on downstream tasks often degrades significantly. In this paper, we present METER, a Multimodal End-to-end TransformER framework, through which we investigate how to design and pre-train a fully transformer-based VL model in an end-to-end manner. Specifically, we dissect the model designs along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion module (e.g., merged attention vs. co-attention), architectural design (e.g., encoder-only vs. encoder-decoder), and pre-training objectives (e.g., masked image modeling). We conduct comprehensive experiments and provide insights on how to train a performant VL transformer. METER achieves an accuracy of 77.64% on the VQAv2 test-std set using only 4M images for pre-training, surpassing the state-of-the-art region-feature-based model by 1.04%, and outperforming the previous best fully transformer-based model by 1.6%. Notably, when further scaled up, our best VQA model achieves an accuracy of 80.54%. Code and pre-trained models are released at https://github.com/zdou0830/METER.
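The merged-attention vs. co-attention distinction mentioned in the abstract can be made concrete with a short sketch. The PyTorch code below is a minimal illustration, not METER's released implementation (which is at the linked repository); the class names, the 768-dimensional hidden size, and the toy inputs are assumptions chosen for exposition.

```python
import torch
import torch.nn as nn


class MergedAttentionBlock(nn.Module):
    """Merged attention: concatenate text and image tokens and run a single
    shared self-attention over the joint sequence (illustrative sketch)."""
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text, image):
        x = torch.cat([text, image], dim=1)       # (B, L_text + L_image, D)
        out, _ = self.attn(x, x, x)               # one attention over both modalities
        x = self.norm(x + out)
        return x[:, :text.size(1)], x[:, text.size(1):]


class CoAttentionBlock(nn.Module):
    """Co-attention: each modality keeps its own stream and cross-attends
    to the other modality's keys and values (illustrative sketch)."""
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.txt_cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_norm = nn.LayerNorm(dim)
        self.img_norm = nn.LayerNorm(dim)

    def forward(self, text, image):
        t, _ = self.txt_cross(text, image, image)  # text queries attend to image tokens
        v, _ = self.img_cross(image, text, text)   # image queries attend to text tokens
        return self.txt_norm(text + t), self.img_norm(image + v)


# Toy usage: 16 text tokens and 50 image patches per example, hidden size 768.
text = torch.randn(2, 16, 768)
image = torch.randn(2, 50, 768)
merged_text, merged_image = MergedAttentionBlock()(text, image)
co_text, co_image = CoAttentionBlock()(text, image)
```

In merged attention the two modalities share one set of attention parameters over the concatenated sequence, whereas co-attention keeps separate per-modality streams that exchange information only through cross-attention; these are the two fusion styles the paper compares in its fusion-module ablation.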

Similar Work