
Can Large Language Models Reason? A Characterization Via 3-SAT

Rishi Hazra, Gabriele Venturato, Pedro Zuidberg Dos Martires, Luc De Raedt. arXiv 2024

[Paper]    
Tags: Training Techniques, Uncategorized

Large Language Models (LLMs) are said to possess advanced reasoning abilities. However, recent work has raised skepticism by showing that LLMs often bypass genuine reasoning through shortcuts. Current methods for assessing the reasoning abilities of LLMs typically rely on open-source benchmarks that may be overrepresented in LLM training data, potentially skewing performance. We instead take a computational-theoretic perspective on reasoning, using 3-SAT, the prototypical NP-complete problem that lies at the core of logical reasoning and constraint satisfaction tasks. By examining the phase transitions in 3-SAT, we empirically characterize the reasoning abilities of LLMs and show how they vary with the inherent hardness of the problems. Our experiments show that LLMs cannot perform the genuine reasoning required to solve 3-SAT problems.
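The paper's empirical handle is the well-known phase transition of random 3-SAT: as the clause-to-variable ratio α = m/n crosses roughly 4.27, instances flip from almost surely satisfiable to almost surely unsatisfiable, with the hardest instances clustered near the threshold. As a minimal sketch of that setup (not the authors' code; the generator and parameter names here are illustrative), the Python snippet below samples random 3-SAT formulas at varying α and estimates P(SAT) by brute force, which is enough to see the transition at small n:

```python
import itertools
import random

# Illustrative sketch of the random 3-SAT phase transition (not the paper's code).
# A literal is a signed integer: +v means variable v is true, -v means it is false.

def random_3sat(n_vars: int, alpha: float, rng: random.Random):
    """Sample round(alpha * n_vars) clauses, each over 3 distinct variables."""
    clauses = []
    for _ in range(round(alpha * n_vars)):
        triple = rng.sample(range(1, n_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in triple))
    return clauses

def is_satisfiable(clauses, n_vars: int) -> bool:
    """Brute-force check; fine for the small n used to visualize the transition."""
    for bits in itertools.product((False, True), repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

if __name__ == "__main__":
    rng = random.Random(0)
    n, trials = 10, 100
    for alpha in (1.0, 2.0, 3.0, 4.0, 4.27, 5.0, 6.0):
        sat = sum(is_satisfiable(random_3sat(n, alpha, rng), n)
                  for _ in range(trials))
        print(f"alpha = {alpha:<4}  P(SAT) ~ {sat / trials:.2f}")
```

At small n the crossover is gradual rather than sharp, but P(SAT) still drops from near 1 at low α to near 0 above the threshold; the paper probes LLMs across this hardness spectrum rather than on a single benchmark distribution.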

Similar Work