
Image-of-thought Prompting For Visual Reasoning Refinement In Multimodal Large Language Models

Zhou Qiji, Zhou Ruochen, Hu Zike, Lu Panzhong, Gao Siyang, Zhang Yue. arXiv 2024

[Paper]    
Tags: Interpretability and Explainability, Multimodal Models, Prompting, Reinforcement Learning

Recent advancements in Chain-of-Thought (CoT) and related rationale-based work have significantly improved the performance of Large Language Models (LLMs) on complex reasoning tasks. With the evolution of Multimodal Large Language Models (MLLMs), enhancing their capability to tackle complex multimodal reasoning problems is a crucial frontier. However, incorporating multimodal rationales into CoT has yet to be thoroughly investigated. We propose the Image-of-Thought (IoT) prompting method, which helps MLLMs extract visual rationales step by step. Specifically, IoT prompting automatically designs critical visual-information extraction operations based on the input images and questions. Each step of visual-information refinement identifies specific visual rationales that support answers to complex visual reasoning questions. Beyond textual CoT, IoT simultaneously utilizes visual and textual rationales to help MLLMs understand complex multimodal information. IoT prompting improves zero-shot visual reasoning performance across various visual understanding tasks in different MLLMs. Moreover, the step-by-step visual feature explanations generated by IoT prompting elucidate the visual reasoning process, aiding the analysis of the cognitive processes of large multimodal models.
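The abstract describes a three-stage pipeline: plan visual-information extraction operations from the question, apply each operation to obtain a visual rationale, then interleave those rationales with a textual CoT prompt. The following is a minimal sketch of that flow under stated assumptions — the operation names (`segment`, `annotate`, `zoom`), the keyword-based planner, and the stubbed rationale extractor are all illustrative stand-ins, not the paper's actual implementation, which would call an MLLM at each step.

```python
# Hypothetical sketch of Image-of-Thought (IoT) style prompting.
# All operation names and prompt wording are illustrative assumptions;
# a real system would query an MLLM to plan ops and extract rationales.

def plan_operations(question: str) -> list[str]:
    """Stage 1: choose visual-information extraction ops for this question.
    Stubbed with a keyword heuristic in place of an MLLM planning call."""
    q = question.lower()
    if "how many" in q or "count" in q:
        return ["segment", "annotate"]
    return ["zoom"]

def extract_rationale(op: str, image_desc: str) -> str:
    """Stage 2: each op yields one visual rationale (stubbed as text)."""
    return f"[{op}] rationale extracted from {image_desc}"

def iot_prompt(question: str, image_desc: str) -> str:
    """Stage 3: interleave the visual rationales with a textual CoT prompt."""
    rationales = [extract_rationale(op, image_desc)
                  for op in plan_operations(question)]
    steps = "\n".join(f"Step {i + 1}: {r}" for i, r in enumerate(rationales))
    return f"Question: {question}\n{steps}\nAnswer step by step:"

print(iot_prompt("How many dogs are in the picture?", "park_photo.jpg"))
```

The key design point, per the abstract, is that the visual rationales are produced step by step and sit alongside the textual chain, so the final answer can be traced back to specific pieces of extracted visual evidence.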

Similar Work