
Enhancing And Accelerating Large Language Models Via Instruction-aware Contextual Compression

Hou Haowen, Ma Fei, Bai Binwen, Zhu Xinxin, Yu Fei. arXiv 2024

[Paper]    
Attention Mechanism, Efficiency And Optimization, Model Architecture, RAG

Large Language Models (LLMs) have garnered widespread attention due to their remarkable performance across various tasks. However, to mitigate the issue of hallucinations, LLMs often incorporate a retrieval-augmented pipeline that provides them with rich external knowledge and context. Nevertheless, challenges arise when the retriever returns inaccurate and coarse-grained context. Supplying irrelevant context to the LLMs can result in poorer responses, increased inference latency, and higher costs. This paper introduces a method called Instruction-Aware Contextual Compression, which filters out less informative content, thereby accelerating and enhancing the use of LLMs. The experimental results demonstrate that Instruction-Aware Contextual Compression notably reduces memory consumption and minimizes generation latency while maintaining performance levels comparable to those achieved with the full context. Specifically, the method achieves a 50% reduction in context-related costs, resulting in a 5% reduction in inference memory usage and a 2.2-fold increase in inference speed, with only a minor drop of 0.047 in ROUGE-1. These findings suggest that the method strikes an effective balance between efficiency and performance.
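The core idea, filtering retrieved context down to the parts most relevant to the user's instruction before it reaches the LLM, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it uses simple lexical overlap with the instruction as a stand-in for the paper's learned informativeness scoring, keeps the highest-scoring sentences within a token budget, and preserves their original order. The function names and the budget parameter are illustrative assumptions.

```python
# Minimal, illustrative sketch of instruction-aware contextual compression.
# Lexical overlap is a crude proxy for the paper's learned relevance scorer.

import re
from typing import List


def split_sentences(text: str) -> List[str]:
    """Naive sentence splitter on ., !, ? boundaries."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def score_sentence(sentence: str, instruction: str) -> float:
    """Score a sentence by word overlap with the instruction (proxy scorer)."""
    sent_words = set(re.findall(r"\w+", sentence.lower()))
    inst_words = set(re.findall(r"\w+", instruction.lower()))
    if not sent_words:
        return 0.0
    return len(sent_words & inst_words) / len(sent_words)


def compress_context(context: str, instruction: str, token_budget: int = 128) -> str:
    """Keep the most instruction-relevant sentences within a rough token budget."""
    sentences = split_sentences(context)
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: score_sentence(sentences[i], instruction),
        reverse=True,
    )
    kept, used = set(), 0
    for i in ranked:
        n_tokens = len(sentences[i].split())  # crude whitespace token count
        if used + n_tokens > token_budget:
            continue
        kept.add(i)
        used += n_tokens
    # Reassemble in original order so the compressed context stays coherent.
    return " ".join(sentences[i] for i in sorted(kept))


if __name__ == "__main__":
    context = (
        "The Eiffel Tower was completed in 1889. "
        "Paris hosts many museums and galleries. "
        "The tower is about 330 metres tall. "
        "French cuisine is famous worldwide."
    )
    instruction = "How tall is the Eiffel Tower?"
    print(compress_context(context, instruction, token_budget=15))
```

In this toy example only the two tower-related sentences survive compression, so the LLM receives a shorter prompt (lower memory and latency) while the information needed to answer the instruction is retained, which is the trade-off the paper quantifies.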

Similar Work