Pre-computed Memory Or On-the-fly Encoding? A Hybrid Approach To Retrieval Augmentation Makes The Most Of Your Compute

Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Joshua Ainslie, Sumit Sanghai, Fei Sha, William Cohen. arXiv 2023

[Paper]    
Tags: Merging, RAG

Retrieval-augmented language models such as Fusion-in-Decoder (FiD) are powerful, setting the state of the art on a variety of knowledge-intensive tasks. However, they are also expensive, due to the need to encode a large number of retrieved passages. Some work avoids this cost by pre-encoding a text corpus into a memory and retrieving dense representations directly. However, pre-encoded memory incurs a severe quality penalty, as the memory representations are not conditioned on the current input. We propose LUMEN, a hybrid between these two extremes: it pre-computes the majority of the retrieval representation and completes the encoding on the fly using a live encoder that is conditioned on the question and fine-tuned for the task. We show that LUMEN significantly outperforms pure memory on multiple question-answering tasks while being much cheaper than FiD, and outperforms both for any given compute budget. Moreover, the advantage of LUMEN over FiD increases with model size.
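
The hybrid described in the abstract lends itself to a compact sketch: a large memory encoder runs once per corpus passage offline, and a small live encoder finishes the encoding at query time, conditioned on the question, before a FiD-style decoder consumes the concatenated result. The PyTorch sketch below is a minimal illustration under those assumptions; `LumenStyleReader`, `precompute_memory`, and all layer counts and dimensions are hypothetical and not the paper's actual configuration.

```python
import torch
import torch.nn as nn


class LumenStyleReader(nn.Module):
    """Toy hybrid reader in the spirit of LUMEN (sketch, not the paper's code).

    A large "memory encoder" pre-computes passage representations offline;
    a much smaller "live encoder" finishes the encoding online, conditioned
    on the question; the outputs are flattened FiD-style for a decoder.
    """

    def __init__(self, d_model=512, n_heads=8, memory_layers=6, live_layers=2):
        super().__init__()

        def layer():
            return nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

        # Large encoder: run once per corpus passage, then kept fixed.
        self.memory_encoder = nn.TransformerEncoder(layer(), memory_layers)
        # Small encoder: run at query time over [question; memory] tokens.
        self.live_encoder = nn.TransformerEncoder(layer(), live_layers)

    @torch.no_grad()
    def precompute_memory(self, passage_embeddings):
        # Offline step: encode each passage independently of any question.
        # passage_embeddings: (num_passages, passage_len, d_model)
        return self.memory_encoder(passage_embeddings)

    def forward(self, question_embeddings, memory):
        # Online step: prepend the question to each pre-computed passage
        # memory so the cheap live encoder can condition the representation
        # on the current input, recovering most of FiD's quality.
        num_passages = memory.size(0)
        q = question_embeddings.expand(num_passages, -1, -1)
        fused = self.live_encoder(torch.cat([q, memory], dim=1))
        # FiD-style: flatten all passages into one sequence for a decoder.
        return fused.reshape(1, -1, fused.size(-1))


reader = LumenStyleReader()
passages = torch.randn(10, 64, 512)          # 10 retrieved passages, 64 tokens each
memory = reader.precompute_memory(passages)  # done once, offline
question = torch.randn(1, 16, 512)           # 16 question tokens
fused = reader(question, memory)             # cheap per-query step
```

Because only the few-layer live encoder runs per query, the online cost stays close to that of a pure memory model, while conditioning on the question closes most of the quality gap to FiD; per the abstract, LUMEN's advantage over FiD grows with model size.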

Similar Work