LLMs Achieve Adult Human Performance on Higher-Order Theory of Mind Tasks

Winnie Street, John Oliver Siy, Geoff Keeling, Adrien Baranes, Benjamin Barnett, Michael McKibben, Tatenda Kanyere, Alison Lentz, Blaise Aguera y Arcas, Robin I. M. Dunbar. arXiv 2024

[Paper]    
Applications · GPT · Model Architecture · Uncategorized

This paper examines the extent to which large language models (LLMs) have developed higher-order theory of mind (ToM): the human ability to reason about multiple mental and emotional states in a recursive manner (e.g. "I think that you believe that she knows"). The paper builds on prior work by introducing a handwritten test suite – Multi-Order Theory of Mind Q&A – and using it to compare the performance of five LLMs to a newly gathered adult human benchmark. The authors find that GPT-4 and Flan-PaLM reach adult-level and near adult-level performance on ToM tasks overall, and that GPT-4 exceeds adult performance on 6th-order inferences. The results suggest an interplay between model size and finetuning in the realisation of ToM abilities, and that the best-performing LLMs have developed a generalised capacity for ToM. Given the role that higher-order ToM plays in a wide range of cooperative and competitive human behaviours, these findings have significant implications for user-facing LLM applications.
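To make the idea of "order" concrete, the sketch below shows how recursively nested mental-state statements can be composed and how per-order accuracy might be tallied when comparing model answers to a gold key. This is not the authors' released code; all names (`AGENTS`, `make_statement`, `score_by_order`) are illustrative assumptions about how such a benchmark could be structured.

```python
# Illustrative sketch only: composing nth-order theory-of-mind statements
# and scoring answers per inference order. Not the paper's actual test suite.

AGENTS = ["Anna", "Ben", "Chloe", "Dan", "Ella", "Finn"]
VERBS = ["thinks", "believes", "knows", "suspects", "hopes", "feels"]

def make_statement(order: int, fact: str) -> str:
    """Nest mental-state clauses to the requested order.

    order=1 -> "Anna thinks that <fact>"
    order=3 -> "Anna thinks that Ben believes that Chloe knows that <fact>"
    """
    clause = fact
    for i in reversed(range(order)):
        clause = f"{AGENTS[i]} {VERBS[i]} that {clause}"
    return clause

def score_by_order(items: list[dict]) -> dict[int, float]:
    """items: [{"order": 6, "gold": "yes", "answer": "yes"}, ...]
    Returns accuracy grouped by inference order, mirroring the kind of
    per-order comparison (e.g. 6th-order) reported in the paper."""
    totals: dict[int, list[int]] = {}
    for it in items:
        hit, n = totals.setdefault(it["order"], [0, 0])
        totals[it["order"]] = [hit + (it["answer"] == it["gold"]), n + 1]
    return {order: hit / n for order, (hit, n) in totals.items()}

if __name__ == "__main__":
    print(make_statement(3, "the meeting moved to Friday"))
    print(score_by_order([
        {"order": 2, "gold": "yes", "answer": "yes"},
        {"order": 6, "gold": "no", "answer": "yes"},
    ]))
```

Grouping results by order, rather than reporting a single aggregate score, is what allows claims like "GPT-4 exceeds adult performance on 6th-order inferences" to be made at all.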

Similar Work