Cheap And Good? Simple And Effective Data Augmentation For Low Resource Machine Reading

Van Hoang, Yadav Vikas, Surdeanu Mihai. arXiv 2021

[Paper]    
BERT · Model Architecture · Training Techniques

We propose a simple and effective data augmentation strategy for low-resource machine reading comprehension (MRC). Our approach first pretrains the answer extraction components of an MRC system on augmented data that contains the approximate context of the correct answers, before training them on the exact answer spans. The approximate context helps the QA components narrow down the location of the answers. We demonstrate that this simple strategy substantially improves both document retrieval and answer extraction performance by providing larger answer contexts and additional training data. In particular, our method significantly improves the performance of a BERT-based retriever (by 15.12%) and answer extractor (by 4.33% F1) on TechQA, a complex, low-resource MRC task. Further, our data augmentation strategy yields significant improvements of up to 3.9% exact match (EM) and 2.7% F1 for answer extraction on PolicyQA, another practical but moderately sized QA dataset that also contains long answer spans.
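To make the two-stage idea concrete, below is a minimal Python sketch of the augmentation step as described in the abstract: each exact answer span is relabeled with a wider window of surrounding context for a first pretraining pass, followed by a second pass on the original exact spans. The field names (`context`, `answer_text`, `answer_start`), the `window` size, and the `fit` training callback are illustrative assumptions, not the paper's actual implementation.

```python
def make_approximate_example(example, window=100):
    """Relabel the target span as a window of text around the exact answer,
    giving the extractor a coarser, easier-to-locate target for pretraining.
    NOTE: field names and window size are hypothetical, for illustration only."""
    start = example["answer_start"]
    end = start + len(example["answer_text"])
    approx_start = max(0, start - window)
    approx_end = min(len(example["context"]), end + window)
    return {
        "context": example["context"],
        "answer_text": example["context"][approx_start:approx_end],
        "answer_start": approx_start,
    }

def two_stage_training(train_set, fit):
    """Stage 1: pretrain on approximate-context targets (augmented data);
    stage 2: train on the exact answer spans. `fit` stands in for any
    span-extraction training routine (e.g., fine-tuning a BERT extractor)."""
    approx_set = [make_approximate_example(ex) for ex in train_set]
    fit(approx_set)   # coarse localization on augmented data
    fit(train_set)    # refinement on exact spans

if __name__ == "__main__":
    # Toy example: compute answer_start from the context to keep offsets correct.
    context = ("The server fails to start when the JVM heap size is too small. "
               "Increase the -Xmx setting.")
    answer = "Increase the -Xmx setting."
    example = {
        "context": context,
        "answer_text": answer,
        "answer_start": context.index(answer),
    }
    print(make_approximate_example(example, window=30))
```

The design intuition matches the abstract: the coarse window makes the localization problem easier during pretraining, and the second pass on exact spans then refines the extractor's boundaries.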

Similar Work