
Solving Challenging Math Word Problems Using GPT-4 Code Interpreter With Code-based Self-verification

Zhou Aojun, Wang Ke, Lu Zimu, Shi Weikang, Luo Sichun, Qin Zipeng, Lu Shaoqing, Jia Anya, Song Linqi, Zhan Mingjie, Li Hongsheng. arXiv 2023

[Paper]    
GPT, Model Architecture, Prompting, RAG

Recent progress in large language models (LLMs) such as GPT-4 and PaLM-2 has brought significant advances in addressing math reasoning problems. In particular, OpenAI's latest version of GPT-4, known as GPT-4 Code Interpreter, shows remarkable performance on challenging math datasets. In this paper, we explore the effect of code on enhancing LLMs' reasoning capability by introducing different constraints on the *Code Usage Frequency* of GPT-4 Code Interpreter. We find that its success can be largely attributed to its powerful skills in generating and executing code, evaluating the output of code execution, and rectifying its solution when it receives unreasonable outputs. Based on this insight, we propose a novel and effective prompting method, explicit **c**ode-based **s**elf-**v**erification (CSV), to further boost the mathematical reasoning potential of GPT-4 Code Interpreter. This method employs a zero-shot prompt that encourages GPT-4 Code Interpreter to use code to self-verify its answers. When the verification state registers as "False", the model automatically amends its solution, analogous to correcting errors during a mathematics examination. Furthermore, we recognize that the states of the verification result indicate the confidence of a solution, which can improve the effectiveness of majority voting. With GPT-4 Code Interpreter and CSV, we achieve an impressive zero-shot accuracy on the MATH dataset **(53.9% → 84.3%)**.
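The abstract describes two mechanisms: a self-verification loop that amends a solution when a code-based check returns "False", and majority voting weighted by the verification state. The sketch below illustrates only the weighted-voting idea; the state labels and weight values are illustrative assumptions, not the paper's actual implementation or weights.

```python
from collections import Counter

# Hypothetical verification states a CSV-style prompt might yield per sampled
# solution. The exact weights are an assumption for illustration: a solution
# whose self-verification passed ("True") earns a full vote, while unverified
# or failed solutions earn less.
STATE_WEIGHTS = {"True": 1.0, "Uncertain": 0.5, "False": 0.1}

def weighted_majority_vote(samples):
    """Return the answer with the highest verification-weighted vote total.

    `samples` is a list of (answer, verification_state) pairs, e.g. the final
    answer and self-verification outcome parsed from each of k sampled
    solutions to the same problem.
    """
    scores = Counter()
    for answer, state in samples:
        scores[answer] += STATE_WEIGHTS.get(state, 0.0)
    answer, _ = scores.most_common(1)[0]
    return answer

if __name__ == "__main__":
    # Three sampled solutions: two agree on "42", but one of those failed its
    # own code-based check; the lone "41" verified as True and outweighs them.
    samples = [("42", "False"), ("42", "Uncertain"), ("41", "True")]
    print(weighted_majority_vote(samples))  # -> "41"
```

Under plain (unweighted) majority voting, "42" would win 2–1 here; weighting by verification state lets a single verified solution override two unverified or refuted ones, which is the confidence signal the abstract refers to.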

Similar Work