
Readtwice: Reading Very Large Documents With Memories

Yury Zemlyanskiy, Joshua Ainslie, Michiel de Jong, Philip Pham, Ilya Eckstein, Fei Sha. arXiv 2021

[Paper] [Code]
Applications · Has Code · Model Architecture · Pretraining Methods · Transformer

Knowledge-intensive tasks such as question answering often require assimilating information from different sections of large inputs such as books or article collections. We propose ReadTwice, a simple and effective technique that combines several strengths of prior approaches to model long-range dependencies with Transformers. The main idea is to read text in small segments, in parallel, summarizing each segment into a memory table to be used in a second read of the text. We show that the method outperforms models of comparable size on several question answering (QA) datasets and sets a new state of the art on the challenging NarrativeQA task, with questions about entire books. Source code and pre-trained checkpoints for ReadTwice can be found at https://goo.gle/research-readtwice.
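The two-pass scheme in the abstract can be sketched in a few lines. This is a toy illustration only, not the paper's architecture: `encode`, the projection `W`, and mean-pooling as both the summarization and the "cross-attention" to memory are stand-in assumptions; the real model uses Transformer layers that attend over the memory table.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8
W = rng.normal(size=(d_model, d_model))  # stand-in for encoder weights

def encode(segment, memory=None):
    # Stand-in for a Transformer encoder: a fixed linear projection.
    # A real model would attend over `memory`; here we just add its mean.
    h = segment @ W
    if memory is not None:
        h = h + memory.mean(axis=0)  # toy substitute for memory attention
    return h

# Split a long document into small fixed-size segments.
doc = rng.normal(size=(6 * 4, d_model))   # 6 segments of 4 "tokens" each
segments = np.split(doc, 6)

# First read: encode segments independently (hence parallelizable) and
# summarize each into a single memory vector (here: mean pooling).
memory_table = np.stack([encode(s).mean(axis=0) for s in segments])

# Second read: re-encode each segment, now conditioning on the global
# memory table so information from distant segments is available.
second_pass = [encode(s, memory=memory_table) for s in segments]

print(memory_table.shape)  # one summary vector per segment: (6, 8)
```

The key property the sketch preserves is that the first pass touches each segment in isolation, so long-range dependencies enter only through the compact memory table consulted during the second pass.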

Similar Work