
Position Interpolation Improves ALiBi Extrapolation

Al-Khateeb Faisal, Dey Nolan, Soboleva Daria, Hestness Joel. arXiv 2023

[Paper]    
Tags: Applications, Attention Mechanism, Ethics And Bias, Model Architecture

Linear position interpolation helps pre-trained models that use rotary position embeddings (RoPE) extrapolate to longer sequence lengths. We propose using linear position interpolation to extend the extrapolation range of models that use Attention with Linear Biases (ALiBi). We find that position interpolation significantly improves extrapolation on upstream language modelling as well as downstream summarization and retrieval tasks.
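
To make the idea concrete, below is a minimal sketch (not the paper's implementation) of linear position interpolation applied to an ALiBi bias matrix: when the evaluation length exceeds the training length, relative distances are rescaled by `train_len / seq_len` so the largest distance stays within the range seen during training. The function names and the exact scaling scheme are illustrative assumptions.

```python
import torch


def alibi_slopes(n_heads: int) -> torch.Tensor:
    """Standard ALiBi head slopes: the geometric sequence 2^(-8/n), 2^(-16/n), ..."""
    start = 2.0 ** (-8.0 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])


def alibi_bias(seq_len: int, n_heads: int, train_len: int | None = None) -> torch.Tensor:
    """ALiBi attention bias of shape (n_heads, seq_len, seq_len).

    If `train_len` is given and `seq_len` exceeds it, relative distances are
    linearly rescaled by train_len / seq_len (position interpolation), an
    assumed scheme for illustration.
    """
    pos = torch.arange(seq_len)
    # Causal relative distance (i - j), clamped to 0 above the diagonal.
    dist = (pos[:, None] - pos[None, :]).clamp(min=0).float()
    if train_len is not None and seq_len > train_len:
        # Linear position interpolation: shrink distances so the maximum
        # matches the maximum distance seen during training.
        dist = dist * (train_len / seq_len)
    slopes = alibi_slopes(n_heads)
    # ALiBi adds -slope * distance to the attention logits before softmax.
    return -slopes[:, None, None] * dist[None, :, :]


# Usage: evaluate at 4096 tokens for a hypothetical model trained at 2048.
bias = alibi_bias(seq_len=4096, n_heads=8, train_len=2048)
# attn_logits = q @ k.transpose(-2, -1) / math.sqrt(d_head) + bias
```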
