Language Models With Image Descriptors Are Strong Few-shot Video-language Learners

Wang Zhenhailong, Li Manling, Xu Ruochen, Zhou Luowei, Lei Jie, Lin Xudong, Wang Shuohang, Yang Ziyi, Zhu Chenguang, Hoiem Derek, Chang Shih-Fu, Bansal Mohit, Ji Heng. arXiv 2022

[Paper] [Code]    
Applications, Few Shot, Has Code, Pretraining Methods, Prompting, Training Techniques

The goal of this work is to build flexible video-language models that can generalize to various video-to-text tasks from few examples, such as domain-specific captioning, question answering, and future event prediction. Existing few-shot video-language learners focus exclusively on the encoder, resulting in the absence of a video-to-text decoder to handle generative tasks. Video captioners have been pretrained on large-scale video-language datasets, but they rely heavily on finetuning and lack the ability to generate text for unseen tasks in a few-shot setting. We propose VidIL, a few-shot Video-language Learner via Image and Language models, which demonstrates strong performance on few-shot video-to-text tasks without the necessity of pretraining or finetuning on any video datasets. We use image-language models to translate the video content into frame captions and object, attribute, and event phrases, and compose them into a temporal structure template. We then instruct a language model, with a prompt containing a few in-context examples, to generate a target output from the composed content. The flexibility of prompting allows the model to capture any form of text input, such as automatic speech recognition (ASR) transcripts. Our experiments demonstrate the power of language models in understanding videos on a wide variety of video-language tasks, including video captioning, video question answering, video caption retrieval, and video future event prediction. In particular, on video future event prediction, our few-shot model significantly outperforms state-of-the-art supervised models trained on large-scale video datasets. Code and resources are publicly available for research purposes at https://github.com/MikeWangWZHL/VidIL.
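The abstract describes a pipeline in which per-frame captions and object, attribute, and event phrases produced by image-language models are composed into a temporally ordered template, and that template, together with a few in-context examples, is sent to a language model as a prompt. The sketch below illustrates only this composition and prompting step under stated assumptions: the helper names (`compose_temporal_template`, `build_few_shot_prompt`), the temporal markers, and the prompt wording are illustrative, not the paper's exact format; the frame-level captions and phrases are dummy placeholders standing in for the outputs of the image-language models.

```python
# Minimal sketch of a VidIL-style few-shot prompt construction.
# Helper names, temporal markers, and template wording are assumptions for
# illustration; see the official code at https://github.com/MikeWangWZHL/VidIL
# for the actual prompt format and models used.

from typing import List


def compose_temporal_template(frame_captions: List[str],
                              frame_phrases: List[List[str]]) -> str:
    """Arrange per-frame captions and phrases into a temporally ordered block."""
    lines = []
    for i, (cap, phrases) in enumerate(zip(frame_captions, frame_phrases)):
        marker = "First" if i == 0 else "Then"  # simple temporal markers (illustrative)
        lines.append(f"{marker}, {cap} Objects/attributes/events: {', '.join(phrases)}.")
    return "\n".join(lines)


def build_few_shot_prompt(examples: List[dict],
                          query_video_repr: str,
                          task_instruction: str) -> str:
    """Prepend a task instruction and a few in-context examples, then the query."""
    parts = [task_instruction]
    for ex in examples:
        parts.append(f"Video:\n{ex['video_repr']}\nOutput: {ex['target']}")
    parts.append(f"Video:\n{query_video_repr}\nOutput:")
    return "\n\n".join(parts)


# Dummy frame-level outputs standing in for the image-language models' results.
frame_captions = ["a chef slices vegetables on a cutting board.",
                  "the chef stirs the vegetables in a hot pan."]
frame_phrases = [["chef", "knife", "vegetables", "slicing"],
                 ["pan", "stirring", "steam"]]

video_repr = compose_temporal_template(frame_captions, frame_phrases)
prompt = build_few_shot_prompt(
    examples=[{
        "video_repr": "First, a man opens a laptop. Objects/attributes/events: man, laptop, opening.",
        "target": "A man starts working on his laptop.",
    }],
    query_video_repr=video_repr,
    task_instruction="Write a one-sentence caption for the video.",
)
print(prompt)  # This prompt would then be sent to a large language model (e.g., GPT-3).
```

In this framing, swapping the task instruction and the in-context examples is all that is needed to move between captioning, question answering, and future event prediction, which is the flexibility the abstract emphasizes; additional text inputs such as ASR transcripts could be appended to the video representation in the same way.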

Similar Work