
SemVLP: Vision-Language Pre-training by Aligning Semantics at Multiple Levels

Chenliang Li, Ming Yan, Haiyang Xu, Fuli Luo, Wei Wang, Bin Bi, Songfang Huang. arXiv 2021

Tags: Attention Mechanism, Model Architecture, Multimodal Models, Pretraining Methods, Reinforcement Learning, Tools, Training Techniques, Transformer

Vision-language pre-training (VLP) on large-scale image-text pairs has recently witnessed rapid progress in learning cross-modal representations. Existing pre-training methods either directly concatenate the image and text representations at the feature level as input to a single-stream Transformer, or use a two-stream cross-modal Transformer to align the image-text representations in a high-level semantic space. In real-world image-text data, we observe that some image-text pairs are easy to align through simple semantics shared by both modalities, while others are only related after higher-level abstraction. Therefore, in this paper, we propose a new pre-training method, SemVLP, which jointly aligns both the low-level and high-level semantics between image and text representations. The model is pre-trained iteratively in two prevalent fashions: single-stream pre-training to align at a fine-grained feature level and two-stream pre-training to align high-level semantics, using a shared Transformer network with a pluggable cross-modal attention module. An extensive set of experiments has been conducted on four well-established vision-language understanding tasks to demonstrate the effectiveness of the proposed SemVLP in aligning cross-modal representations at different semantic granularities.
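The abstract describes a shared Transformer encoder that alternates between a single-stream mode (feature-level fusion) and a two-stream mode (high-level alignment via a pluggable cross-modal attention module). The following is a minimal sketch of that idea, not the authors' implementation; all module names, dimensions, and the mode-switching logic are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PluggableCrossAttention(nn.Module):
    """Cross-modal attention block that can be plugged onto the shared encoder."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(query, context, context)
        return self.norm(query + out)


class SemVLPSketch(nn.Module):
    """Shared Transformer reused in both single-stream and two-stream fashions (assumed layout)."""

    def __init__(self, dim: int = 768, num_layers: int = 6, num_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.shared_encoder = nn.TransformerEncoder(layer, num_layers)
        self.cross_attn = PluggableCrossAttention(dim, num_heads)

    def forward(self, text_emb: torch.Tensor, image_emb: torch.Tensor,
                mode: str = "single") -> torch.Tensor:
        if mode == "single":
            # Single-stream: concatenate text and image tokens so the shared
            # encoder aligns them at a fine-grained feature level.
            fused = torch.cat([text_emb, image_emb], dim=1)
            return self.shared_encoder(fused)
        # Two-stream: encode each modality separately with the same shared
        # encoder, then align high-level semantics via cross-modal attention.
        text_h = self.shared_encoder(text_emb)
        image_h = self.shared_encoder(image_emb)
        return self.cross_attn(text_h, image_h)


# Illustrative usage: the two modes would be alternated across pre-training steps.
model = SemVLPSketch()
text = torch.randn(2, 16, 768)   # (batch, text tokens, hidden dim)
image = torch.randn(2, 36, 768)  # (batch, region features, hidden dim)
single_out = model(text, image, mode="single")
two_stream_out = model(text, image, mode="two")
```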
