
A Survey Of Vision-language Pre-trained Models

Du Yifan, Liu Zikang, Li Junyi, Zhao Wayne Xin. arXiv 2022

[Paper]    
Tags: Model Architecture, Multimodal Models, Pretraining Methods, Survey Paper, Training Techniques, Transformer

As the Transformer architecture has evolved, pre-trained models have advanced at a breakneck pace in recent years and have come to dominate the mainstream techniques in natural language processing (NLP) and computer vision (CV). How to adapt pre-training to the field of Vision-and-Language (V-L) learning and improve downstream task performance has become a focus of multimodal learning. In this paper, we review the recent progress in Vision-Language Pre-Trained Models (VL-PTMs). As the core content, we first briefly introduce several ways to encode raw images and texts into single-modal embeddings before pre-training. Then, we dive into the mainstream architectures of VL-PTMs for modeling the interaction between text and image representations. We further present widely used pre-training tasks, and then introduce some common downstream tasks. We finally conclude the paper and outline some promising research directions. Our survey aims to provide researchers with a synthesis of, and pointers to, related research.
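As a rough illustration of the pipeline the abstract describes (single-modal encoding of text and image, followed by cross-modal interaction in a Transformer-style layer), here is a minimal PyTorch sketch. It is not taken from the survey; all module names, dimensions, and the choice of a fusion-encoder style cross-attention block are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the survey's method): encode text tokens and
# image patches into single-modal embeddings, then fuse them with one
# cross-attention block, as in fusion-encoder style VL-PTMs.
import torch
import torch.nn as nn


class TinyVLModel(nn.Module):
    def __init__(self, vocab_size=30522, dim=256, patch_dim=3 * 16 * 16):
        super().__init__()
        # Single-modal encoders: project text tokens and flattened image
        # patches into a shared embedding dimension.
        self.text_embed = nn.Embedding(vocab_size, dim)
        self.patch_embed = nn.Linear(patch_dim, dim)
        # Cross-modal interaction: text queries attend to image keys/values.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, token_ids, patches):
        t = self.text_embed(token_ids)        # (B, L_text, dim)
        v = self.patch_embed(patches)         # (B, L_patches, dim)
        fused, _ = self.cross_attn(t, v, v)   # text attends to image patches
        return fused + self.ffn(fused)        # fused multimodal representation


# Usage: random token ids and 16x16 RGB patches stand in for real inputs.
model = TinyVLModel()
tokens = torch.randint(0, 30522, (2, 12))
patches = torch.rand(2, 49, 3 * 16 * 16)
print(model(tokens, patches).shape)  # torch.Size([2, 12, 256])
```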

Similar Work