
RoR: Read-over-Read for Long Document Machine Reading Comprehension

Jing Zhao, Junwei Bao, Yifan Wang, Yongwei Zhou, Youzheng Wu, Xiaodong He, Bowen Zhou. arXiv 2021

[Paper] [Code]    
Tags: BERT, Has Code, Model Architecture, Pretraining Methods, Transformer

Transformer-based pre-trained models such as BERT have achieved remarkable results on machine reading comprehension. However, due to the constraint on encoding length (e.g., 512 WordPiece tokens), a long document is usually split into multiple chunks that are read independently. As a result, for long document machine reading comprehension the reading field is limited to individual chunks, with no information collaboration across them. To address this problem, we propose RoR, a read-over-read method that expands the reading field from chunk to document. Specifically, RoR includes a chunk reader and a document reader. The former first predicts a set of regional answers for each chunk; these are then compacted into a highly condensed version of the original document that is guaranteed to fit in a single encoding pass. The latter then predicts global answers from this condensed document. Finally, a voting strategy aggregates and reranks the regional and global answers for the final prediction. Extensive experiments on two benchmarks, QuAC and TriviaQA, demonstrate the effectiveness of RoR for long document reading. Notably, RoR ranked first on the QuAC leaderboard (https://quac.ai/) at the time of submission (May 17th, 2021).
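To make the pipeline concrete, here is a minimal Python sketch of the read-over-read flow described in the abstract. All helper names (`split_into_chunks`, `chunk_reader`, `document_reader`) and the toy scoring logic are hypothetical placeholders standing in for the paper's BERT-based readers; this is not the authors' released code.

```python
# A minimal sketch of the RoR pipeline, assuming BERT-style readers
# with a 512-token encoding limit. The reader functions below are
# hypothetical stand-ins that return (answer_tokens, score) pairs.

from typing import List, Tuple

MAX_LEN = 512  # encoding limit in WordPiece tokens

Answer = Tuple[List[str], float]  # (answer tokens, confidence score)


def split_into_chunks(document: List[str], size: int = MAX_LEN) -> List[List[str]]:
    """Split a long token sequence into independently readable chunks."""
    return [document[i:i + size] for i in range(0, len(document), size)]


def chunk_reader(chunk: List[str], question: List[str]) -> List[Answer]:
    """Placeholder for the chunk reader: a real one runs a BERT QA head."""
    return [(chunk[:8], 0.5)]


def document_reader(condensed: List[str], question: List[str]) -> List[Answer]:
    """Placeholder for the document reader over the condensed document."""
    return [(condensed[:8], 0.6)]


def ror_answer(document: List[str], question: List[str]) -> List[str]:
    # 1. Chunk reader: predict regional answers for each chunk.
    regional: List[Answer] = []
    for chunk in split_into_chunks(document):
        regional.extend(chunk_reader(chunk, question))

    # 2. Compact the regional answers into a condensed document
    #    small enough to be encoded in a single pass.
    condensed = [tok for ans, _ in regional for tok in ans][:MAX_LEN]

    # 3. Document reader: predict global answers from the condensed document.
    global_answers = document_reader(condensed, question)

    # 4. Voting: aggregate regional and global candidates and rerank;
    #    here, simply pick the highest-scoring candidate.
    candidates = regional + global_answers
    return max(candidates, key=lambda pair: pair[1])[0]


if __name__ == "__main__":
    doc = ["tok%d" % i for i in range(2000)]  # a "long document" of 2000 tokens
    print(ror_answer(doc, ["what", "is", "ror"]))
```

In the paper, step 4 is a voting strategy that aggregates overlapping spans rather than a simple argmax; the sketch above only illustrates the two-stage chunk-then-document reading structure.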

Similar Work