Response: Emergent Analogical Reasoning In Large Language Models

Hodel Damian, West Jevin. arXiv 2023

[Paper]    
Tags: GPT, Model Architecture, Uncategorized

In their recent Nature Human Behaviour paper, “Emergent analogical reasoning in large language models” (Webb, Holyoak, and Lu, 2023), the authors argue that “large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.” In this response, we provide counterexamples based on the letter-string analogies. In our tests, GPT-3 fails to solve the simplest variations of the original tasks, whereas human performance remains consistently high across all modified versions. Zero-shot reasoning is an extraordinary claim that requires extraordinary evidence, and we do not see that evidence in our experiments. To strengthen claims of humanlike reasoning such as zero-shot reasoning, it is important that the field develop approaches that rule out data memorization.
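To make the task type concrete, the following is a minimal sketch of a letter-string analogy and a modified variant. The successor rule and the permuted alphabet here are illustrative assumptions, not the exact stimuli used in either paper; the idea is that the same abstract rule applied over an unfamiliar alphabet cannot be answered from memorized letter sequences.

```python
import string

def apply_successor(s: str, alphabet: str = string.ascii_lowercase) -> str:
    """Replace the last letter of s with its successor in the given alphabet."""
    idx = alphabet.index(s[-1])
    return s[:-1] + alphabet[(idx + 1) % len(alphabet)]

def make_problem(source: str, target: str, alphabet: str = string.ascii_lowercase):
    """Build an analogy prompt 'source -> solved source ; target -> ?' plus its answer."""
    prompt = f"{source} -> {apply_successor(source, alphabet)} ; {target} -> ?"
    answer = apply_successor(target, alphabet)
    return prompt, answer

# Original-style task over the standard alphabet:
print(make_problem("abcd", "ijkl"))  # ('abcd -> abce ; ijkl -> ?', 'ijkm')

# A simple variation: the same rule over a permuted (fictional) alphabet.
# The permutation below is arbitrary (an assumption for illustration).
permuted = "jqfnzatudlecxbgwskrhyvompi"
print(make_problem("jqfn", "atud", alphabet=permuted))
```

A solver that has abstracted the successor rule handles both prompts equally well, which is the sense in which human performance stays high across modified versions while memorization-based performance can collapse.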

Similar Work