Running Cognitive Evaluations On Large Language Models: The Do's And The Don'ts

Ivanova Anna A. arXiv 2023

[Paper]    
Prompting Tools

In this paper, I describe methodological considerations for studies that aim to evaluate the cognitive capacities of large language models (LLMs) using language-based behavioral assessments. Drawing on three case studies from the literature (a commonsense knowledge benchmark, a theory of mind evaluation, and a test of syntactic agreement), I describe common pitfalls that might arise when applying a cognitive test to an LLM. I then list 10 do’s and don’ts that should help design high-quality cognitive evaluations for AI systems. I conclude by discussing four areas where the do’s and don’ts are currently under active discussion – prompt sensitivity, cultural and linguistic diversity, using LLMs as research assistants, and running evaluations on open vs. closed LLMs. Overall, the goal of the paper is to contribute to the broader discussion of best practices in the rapidly growing field of AI Psychology.
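
To make the kind of language-based behavioral assessment discussed above concrete, below is a minimal sketch (not taken from the paper) of a syntactic-agreement probe in the spirit of the third case study: the model scores minimal pairs of sentences, and we check whether it assigns higher probability to the grammatical member of each pair. The model name `gpt2` and the example sentence pair are illustrative assumptions, not material from the paper.

```python
# Minimal-pairs syntactic agreement probe (illustrative sketch).
# Compares total log-likelihood of a grammatical vs. an agreement-violating sentence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM with a Hugging Face checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability of the sentence under the model."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood over the predicted
    # (shifted) tokens, so multiply by their count to get the total.
    num_predicted = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * num_predicted

# Illustrative minimal pair: grammatical vs. agreement-violating sentence.
pairs = [
    ("The keys to the cabinet are on the table.",
     "The keys to the cabinet is on the table."),
]

for grammatical, ungrammatical in pairs:
    prefers_grammatical = sentence_logprob(grammatical) > sentence_logprob(ungrammatical)
    print(f"Prefers grammatical form: {prefers_grammatical}")
```

Comparing total (rather than per-token) log-likelihood keeps the two sentences on an equal footing even if they tokenize to slightly different lengths; in practice such probes are run over many minimal pairs and multiple prompt formats, which connects to the paper's discussion of prompt sensitivity.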

Similar Work