Prompt Compression With Context-aware Sentence Encoding For Fast And Improved LLM Inference

Liskavets Barys, Ushakov Maxim, Roy Shuvendu, Klibanov Mark, Etemad Ali, Luke Shane. arXiv, 2024

[Paper] [Code]
Efficiency And Optimization Has Code Prompting

Large language models (LLMs) have triggered a new stream of research on compressing the context length to reduce computational cost while retaining the information an LLM needs to answer a given question. Token-level removal methods are among the most prominent approaches in this direction, but they risk losing the semantics of the context through intermediate token removal, especially at high compression ratios, and they also face challenges in computational efficiency. In this work, we propose context-aware prompt compression (CPC), a sentence-level prompt compression technique whose key innovation is a novel context-aware sentence encoder that provides a relevance score for each sentence with respect to a given question. To train this encoder, we generate a new dataset of questions paired with positive and negative sentences, where positives are sentences relevant to the question and negatives are irrelevant context sentences, and we train the encoder in a contrastive setup to learn context-aware sentence representations. Our method considerably outperforms prior prompt-compression work on benchmark datasets and is up to 10.93x faster at inference than the best token-level compression method. We also observe larger improvements under shorter length constraints on most benchmarks, demonstrating the effectiveness of our approach at compressing relevant information into a shorter context. Finally, we release the code and the dataset for quick reproducibility and further development: https://github.com/Workday/cpc.
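The sentence-level compression described in the abstract can be sketched as follows: split the context into sentences, score each sentence's relevance to the question with the sentence encoder, and keep only the top-scoring sentences in their original order. This is a minimal illustration, not the paper's implementation; a toy bag-of-words embedding stands in for the trained context-aware encoder, and the names `encode` and `compress_prompt` are illustrative.

```python
import math
import re
from collections import Counter

def encode(text):
    """Toy stand-in for the trained context-aware sentence encoder:
    a bag-of-words term-count vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def compress_prompt(context, question, keep_ratio=0.5):
    """Score each sentence against the question, keep the top fraction,
    and reassemble the kept sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", context.strip())
    q_vec = encode(question)
    ranked = sorted(range(len(sentences)),
                    key=lambda i: cosine(encode(sentences[i]), q_vec),
                    reverse=True)
    n_keep = max(1, round(len(sentences) * keep_ratio))
    kept = sorted(ranked[:n_keep])  # restore original sentence order
    return " ".join(sentences[i] for i in kept)

context = ("The Eiffel Tower is in Paris. Bananas are rich in potassium. "
           "Paris is the capital of France. My cat sleeps all day.")
print(compress_prompt(context, "Where is the Eiffel Tower located?", 0.5))
```

Because whole sentences are kept or dropped, the compressed prompt stays grammatical, which is the semantic advantage over intermediate token removal that the abstract highlights.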
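The contrastive training setup mentioned above pulls a question's embedding toward embeddings of its positive (relevant) sentences and pushes it away from negatives. A common objective for this is the InfoNCE loss; the sketch below shows it in plain NumPy under the assumption that embeddings are already computed. The function name, dimensions, and temperature are illustrative, and real training would backpropagate this loss through a transformer encoder.

```python
import numpy as np

def info_nce_loss(q, pos, negs, temperature=0.05):
    """InfoNCE: negative log-softmax of the (question, positive) similarity
    over similarities to the positive and all negative sentences."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(q, pos)] + [cos(q, n) for n in negs]) / temperature
    sims -= sims.max()  # numerical stability before exponentiation
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])

rng = np.random.default_rng(0)
q = rng.normal(size=8)                          # question embedding
pos = q + 0.1 * rng.normal(size=8)              # relevant sentence: near q
negs = [rng.normal(size=8) for _ in range(3)]   # irrelevant sentences
print(float(info_nce_loss(q, pos, negs)))
```

When the positive embedding is much closer to the question than the negatives, the loss is near zero; minimizing it over the dataset of question/positive/negative triples yields the context-aware sentence representations used for relevance scoring.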

Similar Work