Prompting Large Vision-language Models For Compositional Reasoning

Ossowski Timothy, Jiang Ming, Hu Junjie. arXiv 2024

[Paper]    
Fine Tuning · GPT · Model Architecture · Multimodal Models · Prompting

Vision-language models such as CLIP have shown impressive capabilities in encoding texts and images into aligned embeddings, enabling the retrieval of multimodal data in a shared embedding space. However, these embedding-based models still struggle to correctly match images and texts with similar visio-linguistic compositionality, as evidenced by their performance on the recent Winoground dataset. In this paper, we argue that this limitation stems from two factors: the use of single vector representations for complex multimodal data, and the absence of step-by-step reasoning in these embedding-based methods. To address these issues, we take an exploratory step with a novel generative method that prompts large vision-language models (e.g., GPT-4) to depict images and perform compositional reasoning. Our method outperforms other embedding-based methods on the Winoground dataset, and achieves a further improvement of up to 10% in accuracy when enhanced with the optimal description.
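As a rough illustration of the contrast the abstract draws, the sketch below scores a Winoground-style caption pair with a single CLIP embedding per input (the baseline the paper critiques) and then shows the kind of describe-then-reason prompt a generative VLM such as GPT-4 could be given instead. The model checkpoint, helper function, and prompt wording are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): embedding-based matching vs. a
# generative, describe-then-reason prompt for compositional image-text matching.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# --- Embedding-based baseline: one vector per image and per caption ---
# Checkpoint choice is an assumption; any CLIP variant works the same way.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_scores(image: Image.Image, captions: list[str]) -> torch.Tensor:
    """Score each caption against the image via CLIP's image-text similarity."""
    inputs = processor(text=captions, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image has shape (1, num_captions); higher means a better match.
    return out.logits_per_image.squeeze(0)

# --- Generative alternative: prompt a large VLM to describe, then reason ---
# Hypothetical prompt template, not the prompt used in the paper. It asks the
# model to first depict the image, then reason step by step before choosing.
COMPOSITIONAL_PROMPT = (
    "Describe the image in detail, paying attention to which attributes "
    "belong to which objects and how the objects relate to each other. "
    "Then reason step by step about which caption matches the image:\n"
    "(A) {cap_a}\n(B) {cap_b}\n"
    "Finally, answer with A or B."
)

if __name__ == "__main__":
    image = Image.open("example.jpg")  # placeholder Winoground-style image
    captions = ["a mug in some grass", "some grass in a mug"]
    print("CLIP scores:", clip_scores(image, captions))
    print(COMPOSITIONAL_PROMPT.format(cap_a=captions[0], cap_b=captions[1]))
```

The CLIP baseline collapses each input to a single vector, which is exactly where the abstract locates the failure on compositionally similar pairs; the prompt, sent to a vision-capable LLM, instead elicits an explicit description and step-by-step comparison before a final choice.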

Similar Work