
Assessing GPT4-V On Structured Reasoning Tasks

Mukul Singh, José Cambronero, Sumit Gulwani, Vu Le, Gust Verbruggen. arXiv 2023

[Paper]    
Tags: Applications, GPT, Model Architecture, Multimodal Models, Prompting

Multi-modality promises to unlock further uses for large language models. Recently, the state-of-the-art language model GPT-4 was enhanced with vision capabilities. We carry out a prompting evaluation of GPT-4V and five other baselines on structured reasoning tasks, such as mathematical reasoning, visual data analysis, and code generation. We show that visual Chain-of-Thought, an extension of Chain-of-Thought to multi-modal LLMs, yields significant improvements over the vanilla model. We also present a categorized analysis of scenarios where these models perform well and where they struggle, highlighting challenges associated with coherent multimodal reasoning.
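As an illustration, visual chain-of-thought can be realized by pairing an image with a question and appending an instruction to reason step by step before answering. The sketch below builds such a prompt in the OpenAI-style chat-completions message format; the function name, model-agnostic structure, and image URL are illustrative assumptions, not the paper's own code.

```python
# Hypothetical sketch: constructing a visual chain-of-thought prompt
# in the OpenAI-style multimodal chat message format. No API call is
# made here; this only assembles the request payload.

def build_visual_cot_messages(image_url: str, question: str) -> list[dict]:
    """Pair an image with a question and ask the model to reason step by step."""
    return [
        {
            "role": "user",
            "content": [
                # The image the model should ground its reasoning in.
                {"type": "image_url", "image_url": {"url": image_url}},
                # The question, extended with a chain-of-thought instruction.
                {
                    "type": "text",
                    "text": (
                        f"{question}\n"
                        "First describe the relevant parts of the image, "
                        "then reason step by step before giving the final answer."
                    ),
                },
            ],
        }
    ]

# Placeholder URL and question for illustration.
messages = build_visual_cot_messages(
    "https://example.com/chart.png",
    "What is the trend in sales between 2019 and 2021?",
)
print(messages[0]["role"])
```

The same payload could be sent to any multimodal chat endpoint; only the reasoning instruction distinguishes it from a vanilla prompt, which is the comparison the paper evaluates.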

Similar Work