From Introspection To Best Practices: Principled Analysis Of Demonstrations In Multimodal In-context Learning

Xu Nan, Wang Fei, Zhang Sheng, Poon Hoifung, Chen Muhao. arXiv 2024

[Paper]    
Ethics And Bias · In Context Learning · Multimodal Models · Pretraining Methods · Prompting · Training Techniques

Motivated by the in-context learning (ICL) capabilities of large language models (LLMs), multimodal LLMs that add a visual modality also exhibit similar ICL abilities when multiple image-text pairs are provided as demonstrations. However, relatively little work has investigated the principles behind how and why multimodal ICL works. We conduct a systematic and principled evaluation of multimodal ICL for models of different scales on a broad spectrum of new yet critical tasks. Through perturbations of the information in each modality, we show that modalities matter differently across tasks in multimodal ICL. Accounting for this modality impact, we further employ modality-driven demonstration strategies to boost ICL performance. We also find that demonstration selection is closely related to a model's ability to capture task inductive biases from multimodal ICL. Our principled analysis provides a comprehensive way of understanding the role of demonstrations in multimodal in-context learning, and sheds light on effectively improving multimodal ICL on a wide range of tasks, even when those tasks are unseen in, or even contradict, the pretraining data.
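
The modality-perturbation probing described in the abstract can be illustrated with a short sketch. Everything below (the `Demo` record, the perturbation mode names, and `build_prompt`) is hypothetical scaffolding rather than the authors' code: it shows one generic way to ablate a single modality in the demonstrations while leaving the query untouched, so that performance differences between modes can be attributed to that modality.

```python
import random
from dataclasses import dataclass
from typing import List

# Hypothetical demonstration record: one image-text pair plus its label.
@dataclass
class Demo:
    image: str   # stand-in for an image reference (path or URL)
    text: str    # textual part of the demonstration
    label: str   # the demonstration's answer

def perturb(demos: List[Demo], mode: str, rng: random.Random) -> List[Demo]:
    """Ablate one modality across the demonstrations.

    'drop_image'    : remove visual information entirely.
    'drop_text'     : remove textual information entirely.
    'shuffle_image' : break image-text alignment by permuting images.
    """
    if mode == "drop_image":
        return [Demo("", d.text, d.label) for d in demos]
    if mode == "drop_text":
        return [Demo(d.image, "", d.label) for d in demos]
    if mode == "shuffle_image":
        images = [d.image for d in demos]
        rng.shuffle(images)
        return [Demo(img, d.text, d.label) for img, d in zip(images, demos)]
    return list(demos)  # 'none': unperturbed baseline

def build_prompt(demos: List[Demo], query_image: str, query_text: str) -> str:
    # Serialize the (possibly perturbed) demonstrations followed by the query.
    parts = [f"<image:{d.image}> {d.text} -> {d.label}" for d in demos]
    parts.append(f"<image:{query_image}> {query_text} ->")
    return "\n".join(parts)

if __name__ == "__main__":
    rng = random.Random(0)
    demos = [
        Demo("img1.jpg", "A photo of a cat.", "cat"),
        Demo("img2.jpg", "A photo of a dog.", "dog"),
    ]
    for mode in ["none", "drop_image", "drop_text", "shuffle_image"]:
        print(f"--- {mode} ---")
        print(build_prompt(perturb(demos, mode, rng),
                           "img3.jpg", "A photo of a bird."))
```

Comparing task accuracy across these modes, per task, would indicate which modality the model's ICL behavior actually depends on; the paper's modality-driven demonstration strategies build on exactly this kind of per-task modality impact.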
