Comparing Humans, GPT-4, And GPT-4V On Abstraction And Reasoning Tasks

Melanie Mitchell, Alessandro B. Palmarini, Arseny Moskvichev. Proceedings of the LLM-CP Workshop, AAAI 2023.

[Paper]    
Tags: GPT, Model Architecture, Multimodal Models, Prompting, Reinforcement Learning

We explore the abstract reasoning abilities of text-only and multimodal versions of GPT-4, using the ConceptARC benchmark [10], which is designed to evaluate robust understanding and reasoning with core-knowledge concepts. We extend the work of Moskvichev et al. [10] by evaluating GPT-4 on more detailed, one-shot prompting (rather than simple, zero-shot prompts) with text versions of ConceptARC tasks, and by evaluating GPT-4V, the multimodal version of GPT-4, on zero- and one-shot prompts using image versions of the simplest tasks. Our experimental results support the conclusion that neither version of GPT-4 has developed robust abstraction abilities at humanlike levels.
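To make the evaluation setup concrete, here is a minimal sketch of one-shot prompting a text-only GPT-4 model with a text-encoded ARC-style grid task via the OpenAI chat API. The digit-grid encoding, prompt wording, toy task, and model identifier are illustrative assumptions for this sketch, not the exact format or prompts used in the paper.

```python
# Sketch: one-shot prompting GPT-4 on a text-encoded ARC-style task.
# The grid encoding and prompt text below are assumptions, not the
# paper's exact protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def grid_to_text(grid):
    """Render an ARC-style grid (rows of ints 0-9) as lines of digits."""
    return "\n".join("".join(str(cell) for cell in row) for row in grid)


def one_shot_prompt(demo_in, demo_out, test_in):
    """Build a one-shot prompt: one solved demonstration, then the test input."""
    return (
        "Below is an example of a grid transformation, followed by a new input.\n"
        "Infer the rule from the example and give the output grid for the new input.\n\n"
        f"Example input:\n{grid_to_text(demo_in)}\n"
        f"Example output:\n{grid_to_text(demo_out)}\n\n"
        f"New input:\n{grid_to_text(test_in)}\n"
        "New output:"
    )


# Toy task (purely illustrative): the rule duplicates each row.
demo_in = [[1, 0], [0, 2]]
demo_out = [[1, 0], [1, 0], [0, 2], [0, 2]]
test_in = [[3, 3], [0, 4]]

response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier
    messages=[{"role": "user", "content": one_shot_prompt(demo_in, demo_out, test_in)}],
    temperature=0,  # deterministic output makes grading the answer grid easier
)
print(response.choices[0].message.content)
```

For the GPT-4V condition, the same demonstration and test grids would instead be rendered as images and attached to the message content rather than serialized as digit strings.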
