LogicVista: Multimodal LLM Logical Reasoning Benchmark in Visual Contexts

Xiao Yijia, Sun Edward, Liu Tianyu, Wang Wei. arXiv 2024


We propose LogicVista, an evaluation benchmark that assesses the integrated logical reasoning capabilities of multimodal large language models (MLLMs) in visual contexts. Recent advancements in MLLMs have demonstrated a range of impressive abilities, from crafting poetry based on an image to performing mathematical reasoning. However, there is still no systematic evaluation of MLLMs' proficiency in logical reasoning tasks, which are essential for activities like navigation and puzzle-solving. We therefore evaluate general logical cognition across 5 logical reasoning tasks encompassing 9 different capabilities, using a sample of 448 multiple-choice questions. Each question is annotated with the correct answer and the human-written reasoning behind the selection, enabling both open-ended and multiple-choice evaluation. A total of 8 MLLMs are comprehensively evaluated using LogicVista. Code and data are available at https://github.com/Yijia-Xiao/LogicVista.
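To make the multiple-choice evaluation concrete, here is a minimal scoring sketch in Python. The record fields (`skill`, `answer`), the file name `logicvista.json`, and the letter-extraction heuristic are assumptions for illustration only; see the official repository for the actual data schema and evaluation code.

```python
import json
import re
from collections import defaultdict

# Minimal sketch of LogicVista-style multiple-choice scoring.
# Field names ("skill", "answer") and the file layout are hypothetical;
# consult https://github.com/Yijia-Xiao/LogicVista for the real schema.

def extract_choice(response: str) -> str | None:
    """Pull the first standalone choice letter (A-E) from a model response."""
    match = re.search(r"\b([A-E])\b", response.upper())
    return match.group(1) if match else None

def score(questions: list[dict], responses: list[str]) -> dict[str, float]:
    """Return per-skill accuracy over annotated multiple-choice questions."""
    correct, total = defaultdict(int), defaultdict(int)
    for question, response in zip(questions, responses):
        skill = question["skill"]
        total[skill] += 1
        if extract_choice(response) == question["answer"]:
            correct[skill] += 1
    return {skill: correct[skill] / total[skill] for skill in total}

if __name__ == "__main__":
    with open("logicvista.json") as f:
        questions = json.load(f)
    responses = ["The answer is B.", "A"]  # stand-in model outputs
    print(score(questions[: len(responses)], responses))
```

Because each question also carries human-written reasoning, an open-ended variant of this loop could compare a model's free-form explanation against the annotation (e.g., with an LLM judge) rather than matching a single letter.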
