Large Language Models Fail On Trivial Alterations To Theory-of-mind Tasks

Tomer Ullman. arXiv 2023


Intuitive psychology is a pillar of common-sense reasoning. Replicating this reasoning in machine intelligence is an important stepping-stone on the way to human-like artificial intelligence. Several recent tasks and benchmarks for examining this reasoning in Large Language Models have focused in particular on belief attribution in Theory-of-Mind (ToM) tasks. These tasks have shown both successes and failures. We consider in particular a recent purported success case, and show that small variations that maintain the principles of ToM turn the results on their head. We argue that in general, the zero-hypothesis for model evaluation in intuitive psychology should be skeptical, and that outlying failure cases should outweigh average success rates. We also consider what possible future successes on Theory-of-Mind tasks by more powerful LLMs would mean for ToM tasks with people.

Similar Work