
Zero-shot Image Captioning By Anchor-augmented Vision-language Space Alignment

Wang Junyang, Zhang Yi, Yan Ming, Zhang Ji, Sang Jitao. arXiv 2022

[Paper]    
Attention Mechanism, Efficiency And Optimization, Model Architecture, Multimodal Models, Training Techniques

CLIP (Contrastive Language-Image Pre-Training) has shown remarkable zero-shot transfer capabilities in cross-modal correlation tasks such as visual classification and image retrieval. However, its performance in cross-modal generation tasks like zero-shot image captioning remains unsatisfactory. In this work, we observe that directly employing CLIP for zero-shot image captioning relies heavily on the textual context and largely ignores the visual information, an effect we call the contextual language prior. To address this, we propose Cross-modal Language Models (CLMs) to facilitate unsupervised cross-modal learning. We further propose Anchor Augment to guide the generative model's attention to the fine-grained information in CLIP's representation. Experiments on MS COCO and Flickr30K validate the promising performance of the proposed approach in both captioning quality and computational efficiency.
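The sketch below is not the paper's CLM or Anchor Augment method; it only illustrates the cross-modal scoring that CLIP provides and that zero-shot captioning approaches build on, namely ranking candidate captions by image-text similarity. The checkpoint name and image URL are standard Hugging Face example values, assumed here for a self-contained demo.

```python
# Minimal sketch (assumption: Hugging Face transformers CLIP checkpoint is available):
# score candidate captions against an image with CLIP's image-text similarity.
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# A COCO validation image commonly used in Hugging Face examples (not from the paper).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

candidate_captions = [
    "two cats sleeping on a couch",
    "a dog running on the beach",
    "a plate of food on a table",
]

inputs = processor(text=candidate_captions, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity; higher means a better match.
scores = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for caption, score in zip(candidate_captions, scores.tolist()):
    print(f"{score:.3f}  {caption}")
```

In a generation setting, relying only on a language model's next-token context is what the abstract calls the contextual language prior; the paper's contribution is to keep the visual signal in play during generation rather than only at a re-ranking step like the one sketched above.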

Similar Work