Towards A Unified Multimodal Reasoning Framework · The Large Language Model Bible

Towards A Unified Multimodal Reasoning Framework

Abhinav Arun, Dipendra Singh Mal, Mehul Soni, Tomohiro Sawada. arXiv 2023

[Paper]    
Tags: Applications · GPT · Model Architecture · Multimodal Models · Reinforcement Learning · Tools

Recent advancements in deep learning have led to the development of powerful language models (LMs) that excel in various tasks. Despite these achievements, there is still room for improvement, particularly in enhancing reasoning abilities and incorporating multimodal data. This report investigates the potential impact of combining Chain-of-Thought (CoT) reasoning and Visual Question Answering (VQA) techniques to improve LMs' accuracy in solving multiple-choice questions. Using the TextVQA and ScienceQA datasets, we assessed the effectiveness of three text embedding methods and three visual embedding approaches. Our experiments aimed to fill a gap in current research by investigating the combined impact of CoT and VQA, contributing to the understanding of how these techniques can improve the reasoning capabilities of state-of-the-art models such as GPT-4. The results demonstrated the potential of these approaches to enhance LMs' reasoning and question-answering capabilities, providing insights for further research and paving the way for more accurate and reliable AI systems that can handle complex reasoning tasks across multiple modalities.
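To make the combination concrete, the following is a minimal sketch of how a CoT instruction might be paired with VQA-style visual context (for example, a caption or OCR text extracted from the image) when prompting an LM on a multiple-choice question. The function name, prompt wording, and the idea of passing visual context as text are illustrative assumptions, not the paper's actual method.

```python
def build_cot_vqa_prompt(question, choices, image_context):
    """Compose a multiple-choice prompt (hypothetical helper) that combines
    text-encoded visual context with a Chain-of-Thought instruction.

    question:      the multiple-choice question text
    choices:       list of answer options
    image_context: text derived from the image (e.g., caption or OCR output)
    """
    # Label the options (A), (B), (C), ... as is common for multiple-choice QA.
    lettered = "\n".join(
        f"({chr(ord('A') + i)}) {c}" for i, c in enumerate(choices)
    )
    return (
        f"Image context: {image_context}\n"
        f"Question: {question}\n"
        f"Choices:\n{lettered}\n"
        "Let's think step by step, then answer with a single letter."
    )


# Example usage with a ScienceQA-style question (contents invented):
prompt = build_cot_vqa_prompt(
    question="Which object is magnetic?",
    choices=["a rubber band", "an iron nail", "a wooden spoon"],
    image_context="A photo of three objects on a table.",
)
print(prompt)
```

The "Let's think step by step" suffix is the standard zero-shot CoT trigger; the visual-embedding variants studied in the paper would replace the plain-text `image_context` with learned image features.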

Similar Work