
Bifurcated Attention: Accelerating Massively Parallel Decoding With Shared Prefixes In LLMs

Athiwaratkun Ben, Gonugondla Sujan Kumar, Gouda Sanjay Krishna, Qian Haifeng, Ding Hantian, Sun Qing, Wang Jun, Guo Jiacheng, Chen Liangfu, Bhatia Parminder, Nallapati Ramesh, Sengupta Sudipta, Xiang Bing. arXiv 2024

[Paper]    
Applications Attention Mechanism Efficiency And Optimization Model Architecture Reinforcement Learning Transformer

This study introduces bifurcated attention, a method designed to enhance language model inference in shared-context batch decoding scenarios. Our approach addresses the challenge of redundant memory IO costs, a critical factor contributing to latency at high batch sizes and extended context lengths. Bifurcated attention achieves this by strategically dividing the attention mechanism during incremental decoding into two separate GEMM operations: one over the KV cache from prefill, which is shared across samples, and another over the per-sample KV cache accumulated during decoding. While maintaining the computational load (FLOPs) of standard attention mechanisms, bifurcated attention ensures exact computation with significantly reduced memory IO. Our empirical results show over 2.1\(\times\) speedup when sampling 16 output sequences and more than 6.2\(\times\) speedup when sampling 32 sequences at context lengths exceeding 8k tokens on a 7B model that uses multi-head attention. The efficiency gains from bifurcated attention translate into lower latency, making it particularly suitable for real-time applications. For instance, it enables massively parallel answer generation without substantially increasing latency, thus enhancing performance when integrated with post-processing techniques such as re-ranking.
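The split into two GEMMs can be illustrated with a short sketch. The snippet below is a minimal, unoptimized illustration rather than the authors' implementation; it assumes a single decoding step, a prefill KV cache shared by every sample in the batch, and per-sample KV caches for the decoded tokens. The function name `bifurcated_attention` and the tensor shapes are assumptions made for the example.

```python
# Minimal sketch of shared-prefix bifurcated attention (assumed shapes, not the paper's code).
import torch
import torch.nn.functional as F

def bifurcated_attention(q, k_prefix, v_prefix, k_decode, v_decode):
    """One incremental decoding step.

    q:         [batch, heads, 1, d]        query for the current token of each sample
    k_prefix:  [heads, prefix_len, d]      KV cache from prefill, shared by all samples
    v_prefix:  [heads, prefix_len, d]
    k_decode:  [batch, heads, dec_len, d]  per-sample KV cache from previously decoded tokens
    v_decode:  [batch, heads, dec_len, d]
    """
    scale = q.shape[-1] ** -0.5

    # GEMM 1: scores against the shared prefix; the prefix KV is read once per step,
    # not replicated for every sample in the batch.
    scores_prefix = torch.einsum("bhqd,hkd->bhqk", q, k_prefix) * scale

    # GEMM 2: scores against each sample's own decoded tokens.
    scores_decode = torch.einsum("bhqd,bhkd->bhqk", q, k_decode) * scale

    # Softmax over the concatenated (prefix + decode) key positions, as in standard attention.
    weights = F.softmax(torch.cat([scores_prefix, scores_decode], dim=-1), dim=-1)
    w_prefix, w_decode = weights.split(
        [k_prefix.shape[-2], k_decode.shape[-2]], dim=-1
    )

    # Combine the two value contributions into the final attention output.
    out = torch.einsum("bhqk,hkd->bhqd", w_prefix, v_prefix)
    out = out + torch.einsum("bhqk,bhkd->bhqd", w_decode, v_decode)
    return out  # [batch, heads, 1, d]
```

Because the softmax still runs over the full set of prefix and decode scores, the output matches standard attention exactly; the saving comes purely from reading the shared prefix KV cache once per decoding step instead of once per sample.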

Similar Work