
Attention Sorting Combats Recency Bias In Long Context Language Models

Alexander Peysakhovich, Adam Lerer. arXiv 2023

[Paper]    
Attention Mechanism, Ethics And Bias, Model Architecture, RAG, Reinforcement Learning, Training Techniques

Current language models often fail to incorporate long contexts efficiently during generation. We show that a major contributor to this issue is attention priors that are likely learned during pre-training: relevant information located earlier in the context is attended to less on average. Yet even when models fail to use the information from a relevant document in their response, they still pay preferential attention to that document compared to an irrelevant document at the same position. We leverage this fact to introduce "attention sorting": perform one step of decoding, sort the documents by the attention they receive (highest attention going last), repeat the process, and then generate the answer with the newly sorted context. We find that attention sorting improves the performance of long-context models. Our findings highlight some challenges in using off-the-shelf language models for retrieval-augmented generation.
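The abstract describes the procedure only in prose, so here is a minimal sketch of one attention-sorting round using the Hugging Face `transformers` API. The model name, the prompt template, and the choice to average attention over all layers and heads are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of attention sorting: score each document by the attention the final
# (next-token) position pays to it, then reorder so the most-attended documents
# sit last, i.e. closest to the question. Assumptions: any causal LM that can
# return attentions; the doc/question template below is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: stand-in for a long-context model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()


def doc_attention_scores(docs, question):
    """Run one decoding step and return the total attention each document receives."""
    pieces = [f"Document: {d}\n" for d in docs] + [f"Question: {question}\nAnswer:"]
    ids, spans, cursor = [], [], 0
    for i, piece in enumerate(pieces):
        piece_ids = tokenizer(piece, add_special_tokens=False).input_ids
        if i < len(docs):
            spans.append((cursor, cursor + len(piece_ids)))  # token span of doc i
        ids.extend(piece_ids)
        cursor += len(piece_ids)

    input_ids = torch.tensor([ids])
    with torch.no_grad():
        out = model(input_ids, output_attentions=True)

    # Attention from the last position (the single decoding step), averaged
    # over layers and heads: shape [seq_len].
    att = torch.stack(out.attentions).mean(dim=(0, 2))[0, -1]
    return [att[start:end].sum().item() for start, end in spans]


def attention_sort(docs, question, rounds=2):
    """Reorder docs so the most-attended ones come last, repeating for a few rounds."""
    for _ in range(rounds):
        scores = doc_attention_scores(docs, question)
        docs = [d for _, d in sorted(zip(scores, docs), key=lambda pair: pair[0])]
    return docs  # generate the final answer with this reordered context
```

After the loop, the caller would rebuild the prompt from the returned ordering and decode the answer as usual; placing high-attention documents last counteracts the recency bias the paper identifies, since the model attends more readily to material near the end of the context.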

Similar Work