Boosting Large Language Models With Socratic Method For Conversational Mathematics Teaching

Ding Yuyang, Hu Hanglei, Zhou Jie, Chen Qin, Jiang Bo, He Liang. arXiv 2024

[Paper] [Code]    
Applications Has Code

With the introduction of large language models (LLMs), automatic math reasoning has seen tremendous success. However, current methods primarily focus on providing solutions or on techniques like Chain-of-Thought that enhance problem-solving accuracy. In this paper, we focus on improving the mathematics-teaching capability of LLMs via a Socratic teaching-based model (SocraticLLM), which guides learners toward deep thinking, clarity, and self-discovery through conversation. We collect and release a high-quality mathematical teaching dataset, named SocraticMATH, which provides Socratic-style conversations about problems together with extra knowledge. We also propose a knowledge-enhanced LLM as a strong baseline that generates reliable responses through review, guidance/heuristic, rectification, and summarization. Experimental results show the clear advantages of SocraticLLM over several strong generative models. The code and datasets are available at https://github.com/ECNU-ICALK/SocraticMath.
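The abstract names four dialogue operations (review, guidance/heuristic, rectification, summarization). As a rough illustration of how such a tutoring turn could be assembled, here is a minimal Python sketch: the move taxonomy comes from the abstract, while the `SocraticMove` enum, the `build_prompt` helper, and the prompt wording are hypothetical and do not reflect the authors' actual implementation.

```python
from dataclasses import dataclass
from enum import Enum


class SocraticMove(Enum):
    # The four response types named in the abstract.
    REVIEW = "review"                    # restate the student's reasoning so far
    GUIDANCE = "guidance/heuristic"      # a leading question toward the next step
    RECTIFICATION = "rectification"      # point out and correct a specific error
    SUMMARIZATION = "summarization"      # wrap up once the problem is solved


@dataclass
class Turn:
    speaker: str  # "teacher" or "student"
    text: str


def build_prompt(problem: str, knowledge: str, history: list[Turn],
                 move: SocraticMove) -> str:
    """Assemble a knowledge-enhanced prompt that asks the tutor model for
    the next Socratic move instead of a direct solution."""
    dialogue = "\n".join(f"{t.speaker}: {t.text}" for t in history)
    return (
        f"Problem: {problem}\n"
        f"Relevant knowledge: {knowledge}\n"
        f"Dialogue so far:\n{dialogue}\n"
        f"Reply with a single {move.value} turn that guides the student "
        f"toward the answer without revealing it outright."
    )


if __name__ == "__main__":
    history = [Turn("student", "Is the answer to 3x + 6 = 15 just x = 7?")]
    prompt = build_prompt(
        problem="Solve 3x + 6 = 15.",
        knowledge="Isolate x with inverse operations: subtract 6, then divide by 3.",
        history=history,
        move=SocraticMove.RECTIFICATION,
    )
    print(prompt)  # this prompt would then be sent to the tutor LLM
```

In a full system, the resulting prompt would be sent to the fine-tuned tutor model, and the choice of move would presumably be driven by the dialogue state rather than hard-coded as it is here.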

Similar Work