DSI++: Updating Transformer Memory With New Documents

Mehta Sanket Vaibhav, Gupta Jai, Tay Yi, Dehghani Mostafa, Tran Vinh Q., Rao Jinfeng, Najork Marc, Strubell Emma, Metzler Donald. arXiv 2022

[Paper]    
Model Architecture Pretraining Methods RAG Training Techniques Transformer

Differentiable Search Indices (DSIs) encode a corpus of documents in model parameters and use the same model to answer user queries directly. Despite the strong performance of DSI models, deploying them in situations where the corpus changes over time is computationally expensive because reindexing the corpus requires re-training the model. In this work, we introduce DSI++, a continual learning challenge for DSI to incrementally index new documents while being able to answer queries related to both previously and newly indexed documents. Across different model scales and document identifier representations, we show that continual indexing of new documents leads to considerable forgetting of previously indexed documents. We also hypothesize and verify that the model experiences forgetting events during training, leading to unstable learning. To mitigate these issues, we investigate two approaches. The first focuses on modifying the training dynamics. Flatter minima implicitly alleviate forgetting, so we optimize for flatter loss basins and show that the model stably memorizes more documents (\(+12\%\)). Next, we introduce a generative memory to sample pseudo-queries for documents and supplement them during continual indexing to prevent forgetting for the retrieval task. Extensive experiments on novel continual indexing benchmarks based on Natural Questions (NQ) and MS MARCO demonstrate that our proposed solution mitigates forgetting significantly. Concretely, it improves the average Hits@10 by \(+21.1\%\) over competitive baselines for NQ and requires \(6\) times fewer model updates compared to re-training the DSI model for incrementally indexing five corpora in a sequence.
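
The two mitigations described in the abstract can be illustrated with rough Python sketches. These are illustrative assumptions, not the paper's implementation: the checkpoint name `t5-base`, the radius `rho`, the `replay_fraction` mixing ratio, and the helper names are all hypothetical. The first sketch shows a Sharpness-Aware-Minimization-style update as one concrete way "optimizing for flatter loss basins" could be realized; the second shows how a generative memory might sample pseudo-queries for previously indexed documents and mix them into continual-indexing batches.

```python
# Sketch 1: a SAM-style "flat minima" update (an assumption about how flatter
# loss basins could be reached; the paper's exact optimizer may differ).
# `compute_loss` is assumed to be a closure that re-evaluates the indexing
# loss on the current batch.
import torch

def flat_minima_step(model, compute_loss, optimizer, rho=0.05):
    # First pass: gradients at the current weights.
    loss = compute_loss(model)
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params]), p=2)

    # Climb to the (approximate) worst-case point inside an L2 ball of radius rho.
    eps = {}
    with torch.no_grad():
        for p in params:
            e = p.grad * (rho / (grad_norm + 1e-12))
            p.add_(e)
            eps[p] = e
    optimizer.zero_grad()

    # Second pass: gradients at the perturbed point, then restore weights and step.
    compute_loss(model).backward()
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

```python
# Sketch 2: generative-memory replay. A doc2query-style generator samples
# pseudo-queries for already-indexed documents, and (pseudo-query -> docid)
# pairs are mixed into the batches used to index new documents, so the
# retrieval task for old documents keeps being rehearsed.
import random
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("t5-base")                      # assumed backbone
query_gen = T5ForConditionalGeneration.from_pretrained("t5-base")   # stand-in generator

def sample_pseudo_queries(doc_text, n=3):
    inputs = tok(doc_text, return_tensors="pt", truncation=True, max_length=512)
    out = query_gen.generate(**inputs, do_sample=True, top_k=50,
                             num_return_sequences=n, max_new_tokens=32)
    return [tok.decode(ids, skip_special_tokens=True) for ids in out]

def continual_indexing_batch(new_docs, old_corpus, replay_fraction=0.3):
    """new_docs / old_corpus: lists of (docid, doc_text) pairs.
    Returns (input_text, target_docid) training pairs."""
    batch = [(text, docid) for docid, text in new_docs]             # index new documents
    n_replay = min(int(replay_fraction * len(batch)), len(old_corpus))
    for docid, text in random.sample(old_corpus, k=n_replay):       # rehearse old documents
        for q in sample_pseudo_queries(text, n=1):
            batch.append((q, docid))                                # retrieval-task replay
    random.shuffle(batch)
    return batch
```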

Similar Work