
MetaVL: Transferring In-Context Learning Ability from Language Models to Vision-Language Models

Monajatipoor Masoud, Li Liunian Harold, Rouhsedaghat Mozhdeh, Yang Lin F., Chang Kai-wei. arXiv 2023

[Paper]    
In Context Learning Multimodal Models Prompting

Large-scale language models have shown the ability to adapt to a new task by conditioning on a few demonstrations (i.e., in-context learning). However, in the vision-language domain, most large-scale pre-trained vision-language (VL) models do not possess the ability to conduct in-context learning. How can we enable in-context learning for VL models? In this paper, we study an interesting hypothesis: can we transfer in-context learning ability from the language domain to the VL domain? Specifically, we first meta-train a language model to perform in-context learning on NLP tasks (as in MetaICL); we then transfer this model to VL tasks by attaching a visual encoder. Our experiments suggest that in-context learning ability can indeed be transferred across modalities: our model considerably improves in-context learning capability on VL tasks and can even compensate significantly for model size. On VQA, OK-VQA, and GQA, our method outperforms the baseline model while having 20 times fewer parameters.
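The abstract describes the general recipe: keep a meta-trained language model as the in-context learner and feed it image information through a visual encoder whose outputs are projected into the LM's embedding space. The sketch below illustrates that idea only; it is not the paper's released code, and all module names, dimensions, and the prefix length are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a language model meta-trained for in-context learning
# (as in MetaICL) is kept frozen, and a small mapping network projects each
# image's visual features into a few "visual prefix" embeddings in the LM's
# input space. Dimensions and prefix length are illustrative assumptions.

class VisualPrefixMapper(nn.Module):
    """Maps a visual feature vector to k prefix embeddings for the LM."""
    def __init__(self, vis_dim=512, lm_dim=768, prefix_len=4):
        super().__init__()
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        self.proj = nn.Sequential(
            nn.Linear(vis_dim, lm_dim * prefix_len),
            nn.Tanh(),
        )

    def forward(self, vis_feat):                          # (B, vis_dim)
        out = self.proj(vis_feat)                         # (B, lm_dim * k)
        return out.view(-1, self.prefix_len, self.lm_dim)  # (B, k, lm_dim)


def build_incontext_input(demo_pairs, query, mapper, embed_tokens):
    """Interleave visual prefixes and text embeddings for the demonstrations
    followed by the query, mimicking in-context VL prompting."""
    segments = []
    for vis_feat, text_ids in demo_pairs + [query]:
        segments.append(mapper(vis_feat))                 # visual prefix
        segments.append(embed_tokens(text_ids))           # question / answer text
    return torch.cat(segments, dim=1)                     # (B, total_len, lm_dim)


if __name__ == "__main__":
    B, lm_dim = 1, 768
    embed_tokens = nn.Embedding(50257, lm_dim)            # stand-in for the LM's embedding table
    mapper = VisualPrefixMapper(vis_dim=512, lm_dim=lm_dim, prefix_len=4)

    # Two (image, text) demonstrations plus one query image with its question.
    demos = [(torch.randn(B, 512), torch.randint(0, 50257, (B, 12))) for _ in range(2)]
    query = (torch.randn(B, 512), torch.randint(0, 50257, (B, 8)))

    inputs_embeds = build_incontext_input(demos, query, mapper, embed_tokens)
    print(inputs_embeds.shape)  # torch.Size([1, 44, 768]); would be fed to the frozen meta-trained LM
```

In this kind of setup, only the mapping network (and possibly the visual encoder) would need VL training, while the meta-trained language model supplies the in-context learning behavior; how much of the model is frozen is a design choice the paper itself specifies.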

Similar Work