
Bridging The Gap Between Language Model And Reading Comprehension: Unsupervised MRC Via Self-supervision

Bian Ning, Han Xianpei, Chen Bo, Lin Hongyu, He Ben, Sun Le. arXiv 2021

[Paper]    
Tags: Masked Language Model, Pretraining Methods, Tools, Training Techniques

Despite recent success in machine reading comprehension (MRC), learning high-quality MRC models still requires large-scale labeled training data, even when strong pre-trained language models (PLMs) are used. The pre-training tasks for PLMs are not question-answering or MRC-based tasks, so existing PLMs cannot be directly applied to unsupervised MRC. Specifically, MRC aims to spot an accurate answer span in a given document, whereas PLMs focus on filling in masked tokens within sentences. In this paper, we propose a new framework for unsupervised MRC. Firstly, we propose to learn to spot answer spans in documents via self-supervised learning, by designing a self-supervision pretext task for MRC.
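To make the idea of a self-supervision pretext task for span spotting concrete, the sketch below shows one generic way to build (document, query, answer-span) triples without human labels: a sentence is removed from the document, a recurring span in it is blanked to form a cloze-style query, and the target is the location of that span in the remaining text. This is only an illustration under assumed simplifications (capitalized spans as pseudo-answers, the hypothetical helper `make_pseudo_mrc_examples`), not the paper's exact pretext task design.

```python
# Hypothetical sketch of a cloze-style pretext task for unsupervised MRC.
# It illustrates how labeled-looking span-extraction examples can be
# generated from raw text alone; the paper's actual self-supervision
# objective may differ.

import random
import re
from typing import List, Tuple


def make_pseudo_mrc_examples(document: str, mask_token: str = "[MASK]",
                             seed: int = 0) -> List[Tuple[str, str, Tuple[int, int]]]:
    """Turn a raw document into (context, cloze_query, answer_span) triples.

    One sentence is held out, a capitalized span in it is blanked to form a
    cloze-style query, and the answer is the character offsets of that span
    in the remaining document.
    """
    rng = random.Random(seed)
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    examples = []
    for i, sent in enumerate(sentences):
        # Simple capitalized spans serve as pseudo-answers here; a real
        # pipeline could use NER, noun phrases, or salience heuristics.
        candidates = re.findall(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b", sent)
        candidates = [c for c in candidates if document.count(c) >= 2]
        if not candidates:
            continue
        answer = rng.choice(candidates)
        query = sent.replace(answer, mask_token, 1)
        # Drop the query sentence from the context so the model must
        # locate another occurrence of the answer span.
        context = " ".join(sentences[:i] + sentences[i + 1:])
        start = context.find(answer)
        if start == -1:
            continue
        examples.append((context, query, (start, start + len(answer))))
    return examples


if __name__ == "__main__":
    doc = ("Marie Curie was born in Warsaw. She studied physics in Paris. "
           "Marie Curie won two Nobel Prizes. Warsaw honors her legacy today.")
    for context, query, (s, e) in make_pseudo_mrc_examples(doc):
        print(query, "->", context[s:e])
```

The resulting triples can be fed to a standard extractive MRC model (predicting start and end positions), which is what lets span spotting be learned without annotated question-answer pairs.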

Similar Work