Language Models Align With Human Judgments On Key Grammatical Constructions

Jennifer Hu, Kyle Mahowald, Gary Lupyan, Anna Ivanova, Roger Levy. Proceedings of the National Academy of Sciences, 2024

[Paper]

Tags: Ethics And Bias, Prompting

Do large language models (LLMs) make human-like linguistic generalizations? Dentella et al. (2023) (“DGL”) prompt several LLMs (“Is the following sentence grammatically correct in English?”) to elicit grammaticality judgments of 80 English sentences, concluding that LLMs demonstrate a “yes-response bias” and a “failure to distinguish grammatical from ungrammatical sentences”. We re-evaluate LLM performance using well-established practices and find that DGL’s data in fact provide evidence for just how well LLMs capture human behaviors. Models not only achieve high accuracy overall, but also capture fine-grained variation in human linguistic judgments.
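As a concrete illustration of the prompting protocol described above, the sketch below elicits a yes/no grammaticality judgment using DGL's prompt wording. The OpenAI client usage, model name, and example sentence pair are illustrative assumptions, not the paper's exact setup: DGL evaluated several LLMs on 80 curated English sentences.

```python
# Minimal sketch of a DGL-style grammaticality-judgment prompt.
# Model name and example sentences are hypothetical, for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Is the following sentence grammatically correct in English?"

def judge(sentence: str) -> str:
    """Return the model's raw judgment text for one sentence."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; DGL tested several LLMs
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{sentence}"}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical grammatical/ungrammatical minimal pair.
    for s in ["The cat that the dog chased ran away.",
              "The cat that the dog chased it ran away."]:
        print(s, "->", judge(s))
```

Note that raw yes/no answers from a prompt like this are exactly what the paper re-analyzes: scoring them with well-established practices (rather than treating any "yes" as a bias) is what reveals the models' high accuracy and their fit to graded human judgments.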

Similar Work