Reverse That Number! Decoding Order Matters In Arithmetic Learning

Zhang-Li Daniel, Lin Nianyi, Yu Jifan, Zhang Zheyuan, Yao Zijun, Zhang Xiaokang, Hou Lei, Zhang Jing, Li Juanzi. arXiv 2024

[Paper] [Code]    
Tags: Has Code, Pretraining Methods, Reinforcement Learning, Training Techniques

Recent advancements in pretraining have demonstrated that modern Large Language Models (LLMs) can effectively learn arithmetic operations. However, despite acknowledging the significance of digit order in arithmetic computation, current methodologies predominantly rely on sequential, step-by-step approaches for teaching LLMs arithmetic, leading to the conclusion that better performance requires ever finer-grained step-by-step decomposition. Diverging from this conventional path, our work introduces a novel strategy that not only reevaluates the digit order by prioritizing output from the least significant digit, but also incorporates a step-by-step methodology to substantially reduce complexity. We have developed and applied this method in a comprehensive set of experiments. Compared to the previous state-of-the-art (SOTA) method, our findings reveal an overall improvement in accuracy while requiring only a third of the tokens typically used during training. To facilitate replication and further research, we have made our code and dataset publicly available at https://anonymous.4open.science/r/RAIT-9FB7/.
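
To make the reversed decoding order concrete, the minimal sketch below formats an addition example so the answer is emitted least-significant digit first, letting each output digit depend only on digits (and carries) that have already been generated. The function name and target formatting here are illustrative assumptions, not taken from the paper's released RAIT code.

```python
def reversed_answer(a: int, b: int) -> str:
    """Format the target for a + b with its digits reversed,
    so the least significant digit is decoded first."""
    return str(a + b)[::-1]

# Conventional target for 47 + 85 is "132"; the reversed target is "231",
# so the model produces the ones digit (2) before any higher-order digits.
print(reversed_answer(47, 85))  # -> "231"
```

In the conventional left-to-right order, the first digit the model emits depends on carries from all lower digits; reversing the order aligns generation with how column-wise addition actually propagates carries.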

Similar Work