
MathChat: Benchmarking Mathematical Reasoning and Instruction Following in Multi-Turn Interactions

Zhenwen Liang, Dian Yu, Wenhao Yu, Wenlin Yao, Zhihan Zhang, Xiangliang Zhang, Dong Yu. arXiv 2024

[Paper]    
Applications · Reinforcement Learning · Training Techniques

Large language models (LLMs) have demonstrated impressive capabilities in mathematical problem solving, particularly in single-turn question answering formats. However, real-world scenarios often involve mathematical question answering that requires multi-turn or interactive information exchanges, and the performance of LLMs on these tasks is still underexplored. This paper introduces MathChat, a comprehensive benchmark specifically designed to evaluate LLMs across a broader spectrum of mathematical tasks. These tasks are structured to assess the models’ abilities in multi-turn interactions and open-ended generation. We evaluate various state-of-the-art (SOTA) LLMs on the MathChat benchmark and observe that while these models excel in single-turn question answering, they significantly underperform in more complex scenarios that require sustained reasoning and dialogue understanding. To address these limitations of existing LLMs on multi-turn and open-ended tasks, we develop MathChatsync, a synthetic, dialogue-based math dataset for LLM finetuning, focused on improving models’ interaction and instruction-following capabilities in conversations. Experimental results underscore the need to train LLMs on diverse, conversational instruction-tuning datasets such as MathChatsync. We believe this work outlines one promising direction for improving the multi-turn mathematical reasoning abilities of LLMs, pushing forward the development of LLMs that are more adept at interactive mathematical problem solving and real-world applications.
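The evaluation protocol the abstract describes, carrying dialogue history across turns instead of grading isolated questions, can be pictured with a short sketch. Everything below is an illustrative assumption: the `chat` helper, the message format, and the `check_answer` scorer stand in for whatever model API and grading logic an actual harness would use; none of it is the paper's released code.

```python
# Minimal sketch of multi-turn math evaluation in the spirit of MathChat.
# `chat` is a hypothetical stand-in for a call to the model under test
# (e.g. any chat-completion style API); it is not the benchmark's API.

def chat(messages):
    """Placeholder: send the full message history to the LLM under
    evaluation and return its reply as a string."""
    raise NotImplementedError

def evaluate_dialogue(user_turns, check_answer):
    """Feed a multi-turn math dialogue to the model one user turn at a
    time, accumulating the full history, and score the final reply."""
    messages = [{"role": "system",
                 "content": "You are a helpful math assistant."}]
    reply = ""
    for turn in user_turns:  # e.g. a problem followed by follow-up questions
        messages.append({"role": "user", "content": turn})
        reply = chat(messages)  # the model sees the whole history so far
        messages.append({"role": "assistant", "content": reply})
    return check_answer(reply)  # e.g. 1/0 correctness on the final turn
```

The difference from single-turn benchmarks is exactly this loop: each follow-up must be answered against the accumulated context, which is the setting where the paper reports SOTA models falling behind their single-turn performance.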
