Theory Of Mind In Large Language Models: Examining Performance Of 11 State-of-the-art Models Vs. Children Aged 7-10 On Advanced Tests

Max J. van Duijn, Bram M. A. van Dijk, Tom Kouwenhoven, Werner de Valk, Marco R. Spruit, Peter van der Putten. arXiv 2023

[Paper]    
Fine Tuning GPT Merging Model Architecture Prompting Reinforcement Learning Security

To what degree should we ascribe cognitive capacities to Large Language Models (LLMs), such as the ability to reason about intentions and beliefs known as Theory of Mind (ToM)? Here we add to this emerging debate by (i) testing 11 base- and instruction-tuned LLMs on capabilities relevant to ToM beyond the dominant false-belief paradigm, including non-literal language usage and recursive intentionality; (ii) using newly rewritten versions of standardized tests to gauge LLMs’ robustness; (iii) prompting and scoring both open and closed questions; and (iv) benchmarking LLM performance against that of children aged 7-10 on the same tasks. We find that instruction-tuned LLMs from the GPT family outperform the other models, and often also the children. Base LLMs are mostly unable to solve ToM tasks, even with specialized prompting. We suggest that the interlinked evolution and development of language and ToM may help explain what instruction tuning adds: rewarding cooperative communication that takes the interlocutor and context into account. We conclude by arguing for a nuanced perspective on ToM in LLMs.
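The evaluation recipe described in the abstract (prompt each model with a test vignette, then score closed answers automatically) can be sketched roughly as follows. This is an illustrative outline, not the authors' code: `query_model`, the vignette, and the scoring rule are all hypothetical stand-ins.

```python
# Minimal sketch (not the paper's implementation): prompt an LLM with a
# ToM vignette and score a closed (forced-choice) answer automatically.
# `query_model` is a hypothetical placeholder for a real API client.

def query_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in an actual chat-completion client."""
    raise NotImplementedError

def score_closed(response: str, correct_option: str) -> int:
    """Score 1 if the model's reply contains the correct option, else 0."""
    return int(correct_option.lower() in response.lower())

# Illustrative first-order false-belief item; the paper also tests
# non-literal language and recursive intentionality, not shown here.
vignette = (
    "Sam puts his chocolate in the drawer and leaves the room. "
    "While he is away, Anna moves it to the cupboard. Sam returns."
)
question = (
    "Where will Sam look for his chocolate first? "
    "Answer with one word: drawer or cupboard."
)

prompt = f"{vignette}\n\n{question}"
# answer = query_model(prompt)          # e.g. "drawer"
# print(score_closed(answer, "drawer"))
```

Open questions, by contrast, require a rubric or human rater rather than string matching, which is why the paper treats open and closed formats as separate scoring conditions.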

Similar Work