
Large Language Model For Science: A Study On P Vs. NP

Dong Qingxiu, Dong Li, Xu Ke, Zhou Guangyan, Hao Yaru, Sui Zhifang, Wei Furu. arXiv 2023


In this work, we use large language models (LLMs) to augment and accelerate research on the P versus NP problem, one of the most important open problems in theoretical computer science and mathematics. Specifically, we propose Socratic reasoning, a general framework that promotes in-depth thinking with LLMs for complex problem-solving. Socratic reasoning encourages LLMs to recursively discover, solve, and integrate sub-problems while facilitating self-evaluation and refinement. Our pilot study on the P vs. NP problem shows that GPT-4 successfully produces a proof schema and engages in rigorous reasoning throughout 97 dialogue turns, concluding “P \(\neq\) NP”, which is in alignment with Xu and Zhou (2023). The investigation uncovers novel insights within the extensive solution space of LLMs, shedding light on LLMs for science.
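
The abstract describes Socratic reasoning only at a high level. Below is a minimal Python sketch of how such a recursive discover-solve-integrate loop with self-evaluation and refinement could be wired up; the `ask` callable, the prompt wording, and the fixed recursion depth are assumptions for illustration, not the prompts or protocol used in the paper.

```python
# A minimal sketch of a Socratic-reasoning-style dialogue loop.
# The prompt wording, the `ask` callable, and the depth cutoff are
# illustrative assumptions, not the paper's exact prompts or protocol.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Dialogue:
    """Accumulates the turns of the conversation with the model."""
    turns: List[str] = field(default_factory=list)

    def add(self, role: str, text: str) -> None:
        self.turns.append(f"{role}: {text}")

    def context(self) -> str:
        return "\n".join(self.turns)


def socratic_step(problem: str, ask: Callable[[str], str],
                  dialogue: Dialogue, depth: int = 0, max_depth: int = 3) -> str:
    """Recursively discover sub-problems, solve them, and integrate the answers."""
    dialogue.add("user", problem)

    # Discovery: ask the model to break the problem into smaller pieces.
    subs = ask(f"{dialogue.context()}\nList the sub-problems of: {problem}").splitlines()
    subs = [s.strip() for s in subs if s.strip()]

    if depth >= max_depth or not subs:
        # Base case: solve the problem directly.
        answer = ask(f"{dialogue.context()}\nSolve directly: {problem}")
    else:
        # Recurse on each sub-problem, then integrate the partial answers.
        partial = [socratic_step(s, ask, dialogue, depth + 1, max_depth) for s in subs]
        answer = ask(f"{dialogue.context()}\nIntegrate into a solution for '{problem}':\n"
                     + "\n".join(partial))

    # Self-evaluation and refinement, as described in the abstract.
    critique = ask(f"{dialogue.context()}\nCritique this answer: {answer}")
    answer = ask(f"{dialogue.context()}\nRefine the answer using the critique:\n{critique}")
    dialogue.add("assistant", answer)
    return answer
```

With a real model behind `ask` (for example, a thin wrapper around a GPT-4 API call), the recursion mirrors the discover-solve-integrate-refine cycle the abstract describes; the paper itself drives this process through a long multi-turn dialogue rather than a fixed recursion depth.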

Similar Work