S\(^3\)c-math: Spontaneous Step-level Self-correction Makes Large Language Models Better Mathematical Reasoners

Yan Yuchen, Jiang Jin, Liu Yang, Cao Yixin, Xu Xin, Zhang Mengdi, Cai Xunliang, Shao Jian. arXiv 2024

[Paper]    
Training Techniques

Self-correction is a novel method that can stimulate the potential reasoning abilities of large language models (LLMs). It involves detecting and correcting errors during the inference process when LLMs solve reasoning problems. However, recent works generally do not treat self-correction as a spontaneous, intrinsic capability of LLMs. Instead, correction is achieved through post-hoc generation, external knowledge introduction, multi-model collaboration, and similar techniques. In this paper, we propose a series of mathematical LLMs called S\(^3\)c-Math, which are able to perform Spontaneous Step-level Self-correction for Mathematical reasoning. This capability helps LLMs recognize whether their ongoing inference is likely to contain errors and correct these errors on the fly to produce a more reliable response. We propose a method that employs a step-level sampling approach to construct step-wise self-correction data for achieving this ability. Additionally, we implement a training strategy that uses the constructed data to equip LLMs with spontaneous step-level self-correction capabilities. Our data and methods have been demonstrated to be effective across various foundation LLMs, consistently showing significant progress in evaluations on GSM8K, MATH, and other mathematical benchmarks. To the best of our knowledge, we are the first to introduce the spontaneous step-level self-correction ability of LLMs in mathematical reasoning.
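
The abstract only describes the data construction at a high level. The sketch below is one plausible reading of "step-level sampling to construct step-wise self-correction data": sample several candidate next steps, verify each by rolling out to a final answer, and occasionally splice a flawed step plus an explicit correction before a verified step. Everything here is an assumption for illustration, not the authors' pipeline: the `model.generate` interface, the `extract_answer` helper, the correction marker, and all sampling parameters are hypothetical.

```python
import random

def extract_answer(text):
    """Pull the final answer after a fixed marker (assumed solution format)."""
    return text.rsplit("The answer is", 1)[-1].strip()

def verified(model, prefix, step, gold_answer):
    """Roll out the solution from prefix + step and check the final answer."""
    completion = model.generate(prefix + step + "\n")  # assumed interface
    return extract_answer(completion) == gold_answer

def build_example(model, question, gold_answer, n_samples=8, max_steps=10):
    """Construct one step-wise self-correction training example (sketch)."""
    prefix = question + "\n"
    for _ in range(max_steps):
        # Step-level sampling: draw several candidate next steps.
        candidates = [model.generate(prefix, stop="\n") for _ in range(n_samples)]
        good = [s for s in candidates if verified(model, prefix, s, gold_answer)]
        bad = [s for s in candidates if s not in good]
        if not good:
            return None  # no verified continuation; discard this problem
        if bad and random.random() < 0.5:
            # Splice a flawed step followed by an explicit correction marker,
            # so the trained model learns to detect and fix errors mid-solution.
            prefix += bad[0] + "\nWait, the previous step looks wrong. Correcting:\n"
        step = random.choice(good)
        prefix += step + "\n"
        if "The answer is" in step:  # reached a final-answer step
            break
    return {"question": question, "solution": prefix[len(question) + 1:]}
```

Fine-tuning on such traces would then expose the model to error-then-correction patterns within a single solution, which is what would make the correction behavior spontaneous at inference time rather than driven by a separate post-hoc pass.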

Similar Work