Interpreting And Improving Large Language Models In Arithmetic Calculation

Zhang Wei, Wan Chaoqun, Zhang Yonggang, Cheung Yiu-ming, Tian Xinmei, Shen Xu, Ye Jieping. arXiv 2024

[Paper]
Applications Attention Mechanism Fine Tuning Model Architecture Pretraining Methods Prompting Training Techniques

Large language models (LLMs) have demonstrated remarkable potential across numerous applications and have shown an emergent ability to tackle complex reasoning tasks, such as mathematical computations. However, even for the simplest arithmetic calculations, the intrinsic mechanisms behind LLMs remain mysterious, making it challenging to ensure reliability. In this work, we delve into uncovering a specific mechanism by which LLMs execute calculations. Through comprehensive experiments, we find that only a small fraction (<5%) of attention heads play a pivotal role, focusing on operands and operators during the calculation process. The information from these operands is then processed through multi-layer perceptrons (MLPs), progressively leading to the final solution. Although these pivotal heads/MLPs are identified on a specific dataset, they exhibit transferability across different datasets and even distinct tasks. This insight prompted us to investigate the potential benefit of selectively fine-tuning these essential heads/MLPs to boost the LLMs' computational performance. We empirically find that such precise tuning yields notable enhancements in mathematical prowess without compromising performance on non-mathematical tasks. Our work serves as a preliminary exploration of the arithmetic calculation abilities inherent in LLMs, laying a solid foundation for revealing more intricate mathematical tasks.
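
The abstract describes a two-step idea: score attention heads by how strongly they attend to operands and operators, then fine-tune only those pivotal heads/MLPs. The sketch below is not the authors' code; it is a minimal illustration of that idea under assumptions of my own: the model (`gpt2`), the scoring heuristic (attention mass from the final position onto prompt tokens), and the coarse layer-level unfreezing (the paper works at head granularity).

```python
# Minimal sketch: (1) rank attention heads by attention mass on operand/operator
# tokens, (2) selectively fine-tune only the parameters hosting the top heads.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any HF causal LM that returns attentions works similarly
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_attentions=True)
model.eval()

prompt = "12 + 34 ="
inputs = tok(prompt, return_tensors="pt")

# Positions of operands/operators; here simply all prompt tokens.
operand_positions = list(range(inputs["input_ids"].shape[1]))

with torch.no_grad():
    out = model(**inputs)

# out.attentions: one tensor per layer, shape (batch, num_heads, seq, seq).
scores = {}
for layer, attn in enumerate(out.attentions):
    last_row = attn[0, :, -1, :]                       # attention from the answer position
    mass = last_row[:, operand_positions].sum(dim=-1)  # mass on operands/operators, per head
    for head in range(mass.shape[0]):
        scores[(layer, head)] = mass[head].item()

# Keep roughly the top 5% of heads, mirroring the "<5%" figure in the abstract.
top_k = max(1, int(0.05 * len(scores)))
top_heads = sorted(scores, key=scores.get, reverse=True)[:top_k]
print("candidate pivotal heads (layer, head):", top_heads)

# Selective fine-tuning: freeze everything, then unfreeze only the attention blocks
# of layers that host a top-scoring head (a coarse stand-in for head-level tuning).
for p in model.parameters():
    p.requires_grad = False
for layer, _ in top_heads:
    for p in model.transformer.h[layer].attn.parameters():
        p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-5)
# ... train on arithmetic data with the usual causal-LM loss using `optimizer`.
```

With this setup, only the selected attention parameters receive gradient updates, which is the spirit of the paper's finding that precise tuning of a few components can improve arithmetic ability without disturbing the rest of the model.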

Similar Work