
NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-Add-Free Attention

Zhang Tianyi, Yi Jonah Wonkyu, Yao Bowen, Xu Zhaozhuo, Shrivastava Anshumali. arXiv 2024

[Paper] [Code]    
Tags: Attention Mechanism, Has Code, Model Architecture, RAG

Large language model inference on Central Processing Units (CPUs) is challenging due to the vast quantities of expensive Multiply-Add (MAD) matrix operations in the attention computations. In this paper, we argue that there is a rare gem in modern CPUs, Single-Instruction-Multiple-Data (SIMD) registers, which allow for ultra-low-latency lookups in batch. We leverage this unique capability of CPUs to propose NoMAD-Attention, an efficient attention algorithm that replaces MAD operations with in-register lookups. Through hardware-aware algorithmic designs, NoMAD-Attention computes attention scores using repeated fast accesses to SIMD registers despite their highly limited sizes. Moreover, NoMAD-Attention works with pre-trained attention-based LLMs without model finetuning. Empirical evaluations demonstrate that NoMAD-Attention maintains the quality of the original LLMs well, and speeds up the 4-bit quantized LLaMA-7B-based model by up to 2\(\times\) at 16k context length. Our results are reproducible at https://github.com/tonyzhang617/nomad-dist.
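
The abstract's central idea, replacing multiply-add dot products with batched in-register lookups, can be illustrated with a short SIMD sketch. The C++ snippet below is a minimal illustration under assumed details: 4-bit key codes, 8-bit per-sub-quantizer lookup tables, `_mm_shuffle_epi8` as the in-register lookup instruction, and the constant `M` are all choices made here for clarity. It is not the paper's implementation; the actual quantization scheme and table layout live in the linked repository.

```cpp
// Hedged sketch of multiply-add-free score computation via SIMD in-register
// lookups (product-quantization style). All names, sizes, and the table
// layout are illustrative assumptions, not the paper's actual code.
#include <immintrin.h>  // _mm_shuffle_epi8 requires SSSE3 (compile with -mssse3)
#include <cstdint>
#include <cstdio>

// Each key is stored as M 4-bit codes (one per sub-quantizer). For a given
// query, a 16-entry table of 8-bit partial scores is precomputed per
// sub-quantizer; _mm_shuffle_epi8 then resolves 16 lookups per instruction.
constexpr int M = 8;  // number of sub-quantizers (assumed)

// Accumulate approximate attention scores for 16 keys at once.
// codes[m]: the 4-bit code (values 0..15, one per byte) of each of the 16 keys
//           for sub-quantizer m.
// luts[m]:  the 16 precomputed 8-bit partial scores for sub-quantizer m.
__m128i lookup_scores(const __m128i codes[M], const __m128i luts[M]) {
    __m128i acc = _mm_setzero_si128();
    for (int m = 0; m < M; ++m) {
        // In-register lookup: each byte of codes[m] selects a byte of luts[m],
        // replacing a multiply-add with a table access that never leaves the register.
        __m128i partial = _mm_shuffle_epi8(luts[m], codes[m]);
        acc = _mm_add_epi8(acc, partial);  // a saturating add could be used instead
    }
    return acc;
}

int main() {
    __m128i codes[M], luts[M];
    // Toy data: every key has code m for sub-quantizer m; LUT entry i holds i.
    for (int m = 0; m < M; ++m) {
        codes[m] = _mm_set1_epi8(static_cast<int8_t>(m % 16));
        alignas(16) int8_t t[16];
        for (int i = 0; i < 16; ++i) t[i] = static_cast<int8_t>(i);
        luts[m] = _mm_load_si128(reinterpret_cast<const __m128i*>(t));
    }
    __m128i scores = lookup_scores(codes, luts);
    alignas(16) int8_t out[16];
    _mm_store_si128(reinterpret_cast<__m128i*>(out), scores);
    printf("approximate score of key 0: %d\n", out[0]);  // 0+1+...+7 = 28
    return 0;
}
```

Each `_mm_shuffle_epi8` performs 16 table lookups in a single instruction entirely inside a 128-bit register, which is the sense in which the abstract describes computing attention scores through "repeated fast accesses to SIMD registers" despite their limited sizes.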

Similar Work