
Pre-training Image-language Transformers For Open-vocabulary Tasks

AJ Piergiovanni, Weicheng Kuo, Anelia Angelova. arXiv 2022

[Paper]    
Applications Model Architecture Multimodal Models Pretraining Methods Training Techniques Transformer

We present a pre-training approach for vision and language transformer models, which is based on a mixture of diverse tasks. We explore both the use of image-text captioning data in pre-training, which does not need additional supervision, as well as object-aware strategies to pre-train the model. We evaluate the method on a number of text-generative vision+language tasks, such as Visual Question Answering, visual entailment and captioning, and demonstrate large gains over standard pre-training methods.
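The abstract describes pre-training on a mixture of diverse tasks (e.g. captioning-style and object-aware objectives). Below is a minimal sketch, not the authors' code, of how such a mixed-objective update could be combined in practice: names like `model.compute_loss`, the task keys, and the loss weights are hypothetical placeholders, not APIs from the paper.

```python
# Minimal sketch of multi-task pre-training: each task in the mixture
# contributes a weighted loss to a single optimizer step. All names
# below (model, compute_loss, task keys) are illustrative assumptions.
import torch


def pretrain_step(model, task_batches, task_weights, optimizer):
    """One update over a mixture of pre-training tasks.

    task_batches: dict mapping task name -> batch of (image, text) data,
                  e.g. {"captioning": ..., "object_aware": ...}
    task_weights: dict mapping task name -> scalar loss weight
    """
    optimizer.zero_grad()
    total_loss = torch.zeros(())
    for task, batch in task_batches.items():
        # Hypothetical per-task loss (e.g. text-generation loss for
        # captioning, detection-style loss for object-aware pre-training).
        loss = model.compute_loss(task, batch)
        total_loss = total_loss + task_weights[task] * loss
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```

The same idea could instead sample one task per step rather than summing all of them; the paper's exact mixing strategy is not specified in this abstract.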

Similar Work