Repetition Improves Language Model Embeddings

Jacob Mitchell Springer, Suhas Kotha, Daniel Fried, Graham Neubig, Aditi Raghunathan. arXiv 2024

[Paper]    
Fine Tuning · GPT · Language Modeling · Pretraining Methods · RAG · Reinforcement Learning · Training Techniques

Recent approaches to improving the extraction of text embeddings from autoregressive large language models (LLMs) have largely focused on improvements to data, to backbone pretrained language models, or to task differentiation via instructions. In this work, we address an architectural limitation of autoregressive models: token embeddings cannot contain information from tokens that appear later in the input. To address this limitation, we propose a simple approach, “echo embeddings,” in which we repeat the input twice in context and extract embeddings from the second occurrence. We show that echo embeddings of early tokens can encode information about later tokens, allowing us to maximally leverage high-quality LLMs for embeddings. On the MTEB leaderboard, echo embeddings improve over classical embeddings by over 9% zero-shot and by around 0.7% when fine-tuned. Echo embeddings with a Mistral-7B model achieve state-of-the-art performance compared to prior open-source models that do not leverage synthetic fine-tuning data.
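The sketch below illustrates the core idea from the abstract: repeat the input in the context and pool hidden states over the second copy, so that (copies of) later tokens are visible to the representation of earlier ones. The prompt template, model name, mean pooling, and the way the second-copy token span is located are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of "echo embeddings" as described in the abstract:
# the input is repeated twice and embeddings are pooled over the second copy.
# Prompt wording, pooling, and offsets are assumptions for illustration only.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # any decoder-only LM (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()


def echo_embedding(text: str) -> torch.Tensor:
    # Hypothetical template: the text appears twice; we embed the second copy.
    prefix = f"Rewrite the sentence: {text}\nRewritten sentence: "
    prompt = prefix + text

    # Approximate the token offset of the second copy by tokenizing the prefix
    # alone (boundary merges may shift this by a token; acceptable for a sketch).
    prefix_len = len(tokenizer(prefix, add_special_tokens=False)["input_ids"])
    inputs = tokenizer(prompt, return_tensors="pt")
    n_special = inputs["input_ids"].shape[1] - len(
        tokenizer(prompt, add_special_tokens=False)["input_ids"]
    )

    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)

    # Mean-pool hidden states over the tokens of the second occurrence only.
    second_copy = hidden[n_special + prefix_len:]
    return second_copy.mean(dim=0)


# Usage: compare two texts by cosine similarity of their echo embeddings.
e1, e2 = echo_embedding("A cat sat on the mat."), echo_embedding("A feline rested on a rug.")
print(torch.nn.functional.cosine_similarity(e1, e2, dim=0).item())
```

In contrast, a "classical" embedding in this setting would pool over a single pass of the input, where early-token representations cannot reflect later tokens because of causal attention.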

Similar Work