
A Survey On Efficient Inference For Large Language Models

Zhou Zixuan, Ning Xuefei, Hong Ke, Fu Tianyu, Xu Jiaming, Li Shiyao, Lou Yuming, Wang Luning, Yuan Zhihang, Li Xiuhong, Yan Shengen, Dai Guohao, Zhang Xiao-ping, Dong Yuhan, Wang Yu. arXiv 2024

[Paper]    
Attention Mechanism, Efficiency And Optimization, Model Architecture, Survey Paper

Large Language Models (LLMs) have attracted extensive attention due to their remarkable performance across various tasks. However, the substantial computational and memory requirements of LLM inference pose challenges for deployment in resource-constrained scenarios. Efforts within the field have been directed toward developing techniques that enhance the efficiency of LLM inference. This paper presents a comprehensive survey of the existing literature on efficient LLM inference. We start by analyzing the primary causes of inefficient LLM inference, i.e., the large model size, the quadratic-complexity attention operation, and the auto-regressive decoding approach. Then, we introduce a comprehensive taxonomy that organizes the current literature into data-level, model-level, and system-level optimizations. Moreover, the paper includes comparative experiments on representative methods within critical sub-fields to provide quantitative insights. Finally, we summarize the key findings and discuss future research directions.
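
To make the two architectural bottlenecks named in the abstract concrete, here is a minimal NumPy sketch (an illustrative toy, not code from the paper): the attention score matrix has shape n × n, which is the source of the quadratic cost in sequence length, and auto-regressive decoding produces one token per step, with each step attending over the entire prefix.

```python
import numpy as np

def attention(q, k, v):
    """Single-head scaled dot-product attention (toy, no masking)."""
    # The score matrix is (n, n): compute and memory grow
    # quadratically with the sequence length n.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Auto-regressive decoding: tokens are produced one at a time, and
# each step attends over the whole prefix, so the steps cannot run
# in parallel (and without a KV cache, prefix work is recomputed).
rng = np.random.default_rng(0)
d = 16
seq = rng.normal(size=(1, d))        # start from one "token" embedding
for _ in range(7):
    out = attention(seq, seq, seq)   # shape (len(seq), d)
    seq = np.vstack([seq, out[-1:]]) # append the newest output as the next token
print(seq.shape)                     # (8, 16)
```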
