
NTSEBENCH: Cognitive Reasoning Benchmark For Vision Language Models

Pandya Pranshu, Talwarr Agney S, Gupta Vatsal, Kataria Tushar, Gupta Vivek, Roth Dan. arXiv 2024

[Paper]    
Tags: Training Techniques, Uncategorized

Cognitive textual and visual reasoning tasks, such as puzzles, series, and analogies, demand the ability to quickly reason about, decipher, and evaluate patterns both textually and spatially. While LLMs and VLMs, through extensive training on large amounts of human-curated data, have attained a high level of pseudo-human intelligence in some common-sense reasoning tasks, they still struggle with more complex reasoning tasks that require cognitive understanding. In this work, we introduce NTSEBench, a new dataset designed to evaluate the cognitive multi-modal reasoning and problem-solving skills of large models. The dataset comprises 2,728 multiple-choice questions, with a total of 4,642 images, across 26 categories sampled from the NTSE examination conducted nationwide in India, featuring both visual and textual general aptitude questions that do not rely on rote learning. We establish baselines on the dataset using state-of-the-art LLMs and VLMs. To facilitate a comparison between open-source and proprietary models, we propose four distinct modeling strategies to handle the different modalities (text and images) in the dataset instances.
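
The abstract does not specify the released data format, so the following is only a minimal sketch of what an evaluation harness for a benchmark of this shape could look like. The `NTSEItem` schema, its field names, and the example questions are illustrative assumptions, not the actual NTSEBench format; the scoring is plain exact-match accuracy over predicted option labels.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NTSEItem:
    # Hypothetical schema: field names are illustrative, not the released format.
    question: str                # textual stem of the aptitude question
    options: List[str]           # multiple-choice options (text, or labels for image options)
    answer: str                  # gold option label, e.g. "B"
    category: str                # one of the 26 question categories
    image_paths: List[str] = field(default_factory=list)  # zero or more associated images

def accuracy(items: List[NTSEItem], predictions: List[str]) -> float:
    """Exact-match accuracy over predicted option labels."""
    assert len(items) == len(predictions)
    correct = sum(item.answer == pred for item, pred in zip(items, predictions))
    return correct / len(items) if items else 0.0

# Example: one text-only instance and one visual instance (both invented for illustration).
items = [
    NTSEItem("Complete the series: 2, 6, 12, 20, ?",
             ["28", "30", "32", "36"], "B", "number series"),
    NTSEItem("Which figure completes the pattern?",
             ["A", "B", "C", "D"], "C", "figure matrix",
             image_paths=["q2_grid.png"]),
]
print(accuracy(items, ["B", "D"]))  # 0.5: first prediction correct, second wrong
```

A harness like this would sit behind whichever of the four modeling strategies is used to present text and images to a given model; only the predicted option labels are needed for scoring.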
