
DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models

Ge Zheng, Bin Yang, Jiajin Tang, Hong-Yu Zhou, Sibei Yang. arXiv 2023

[Paper]    
Fine Tuning Interpretability And Explainability Multimodal Models Pretraining Methods Prompting RAG Training Techniques

A long-standing goal of AI systems is to perform complex multimodal reasoning like humans. Recently, large language models (LLMs) have made remarkable strides in such multi-step reasoning solely on the language modality by leveraging chain of thought (CoT) to mimic human thinking. However, transferring these advances to multimodal contexts introduces heightened challenges, including but not limited to the impractical need for labor-intensive annotation and limitations in flexibility, generalizability, and explainability. To evoke CoT reasoning in multimodality, this work first conducts an in-depth analysis of the challenges posed by multimodality and presents two key insights: "keeping critical thinking" and "letting everyone do their jobs" in multimodal CoT reasoning. Furthermore, this study proposes a novel DDCoT prompting that maintains a critical attitude through negative-space prompting and incorporates multimodality into reasoning by first dividing the reasoning responsibility of LLMs into reasoning and recognition, and then integrating the visual recognition capability of visual models into the joint reasoning process. The rationales generated by DDCoT not only improve the reasoning abilities of both large and small language models in zero-shot prompting and fine-tuning learning, significantly outperforming state-of-the-art methods, but also exhibit impressive generalizability and explainability.
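To make the division of duties concrete, the sketch below shows one way such a pipeline could be wired together: the LLM decomposes the question and flags sub-questions it cannot answer from text alone, a visual model resolves those, and the LLM then reasons jointly over both. This is a minimal illustration under assumed interfaces, not the paper's implementation; `llm_generate`, `vqa_answer`, and the prompt wording are hypothetical placeholders.

```python
# Minimal sketch of a DDCoT-style prompting pipeline (hypothetical helpers).

def llm_generate(prompt: str) -> str:
    """Placeholder: call a large language model and return its text output."""
    raise NotImplementedError

def vqa_answer(image, question: str) -> str:
    """Placeholder: call a visual recognition / VQA model on the image."""
    raise NotImplementedError

def ddcot_reason(image, question: str, context: str = "") -> str:
    # Duty 1 (reasoning): decompose the question into sub-questions and,
    # via negative-space prompting, mark any sub-question that cannot be
    # answered from text alone as "Uncertain" instead of guessing.
    decompose_prompt = (
        f"Question: {question}\nContext: {context}\n"
        "Break the question into sub-questions. Answer each one if the text "
        "is sufficient; otherwise reply 'Uncertain' instead of guessing."
    )
    decomposition = llm_generate(decompose_prompt)

    # Duty 2 (recognition): route every 'Uncertain' sub-question to the
    # visual model, which grounds its answer in the image.
    visual_facts = []
    for line in decomposition.splitlines():
        if "Uncertain" in line:
            sub_q = line.split("Uncertain")[0].strip(" :-")
            visual_facts.append(f"{sub_q} -> {vqa_answer(image, sub_q)}")

    # Joint reasoning: feed the visual findings back to the LLM to produce
    # the final rationale and answer.
    joint_prompt = (
        f"Question: {question}\nContext: {context}\n"
        f"Sub-question analysis:\n{decomposition}\n"
        "Visual findings:\n" + "\n".join(visual_facts) + "\n"
        "Using the findings above, give a step-by-step rationale and the final answer."
    )
    return llm_generate(joint_prompt)
```

The key design choice the sketch reflects is that the LLM never invents visual facts: anything it flags as uncertain is delegated to the visual model, and only the grounded findings are merged back for the final rationale.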

Similar Work