End-to-end Synthetic Data Generation For Domain Adaptation Of Question Answering Systems

Siamak Shakeri, Cicero Nogueira dos Santos, Henry Zhu, Patrick Ng, Feng Nan, Zhiguo Wang, Ramesh Nallapati, Bing Xiang. arXiv 2020

[Paper]    
Tags: Applications, Fine-Tuning, Model Architecture, Pretraining Methods, Training Techniques, Transformer

We propose an end-to-end approach for synthetic QA data generation. Our model comprises a single transformer-based encoder-decoder network that is trained end-to-end to generate both answers and questions. In a nutshell, we feed a passage to the encoder and ask the decoder to generate a question and an answer token-by-token. The likelihood produced in the generation process is used as a filtering score, which avoids the need for a separate filtering model. Our generator is trained by fine-tuning a pretrained LM using maximum likelihood estimation. The experimental results indicate significant improvements in the domain adaptation of QA models, outperforming current state-of-the-art methods.
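
To make the generate-then-filter loop concrete, here is a minimal sketch using Hugging Face transformers. The checkpoint name, example passage, and separator convention are illustrative assumptions, not the authors' released code: the paper fine-tunes a pretrained encoder-decoder on (passage → question, answer) pairs, so a raw checkpoint will not produce meaningful QA pairs without that step.

```python
# Minimal sketch of generate-then-filter synthetic QA data generation.
# "t5-small" is a stand-in checkpoint, assumed to have been fine-tuned
# on (passage -> question, answer) pairs as described in the paper.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.eval()

passage = "The Amazon rainforest produces roughly 20 percent of Earth's oxygen."
inputs = tokenizer(passage, return_tensors="pt", truncation=True)

# The decoder emits the question and answer token-by-token as one sequence
# (joined with a separator token during fine-tuning in the paper's setup).
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        top_p=0.95,
        num_return_sequences=4,
        output_scores=True,
        return_dict_in_generate=True,
    )

# The model's own generation likelihood doubles as the filtering score, so
# no separate filtering model is needed: average the per-token
# log-probabilities and keep only the highest-scoring candidates.
token_scores = model.compute_transition_scores(
    out.sequences, out.scores, normalize_logits=True
)
gen_tokens = out.sequences[:, 1:]              # drop the decoder start token
keep_mask = gen_tokens != tokenizer.pad_token_id  # ignore padding after EOS
for seq, scores, keep in zip(out.sequences, token_scores, keep_mask):
    avg_logprob = scores[keep].mean().item()
    text = tokenizer.decode(seq, skip_special_tokens=True)
    print(f"score={avg_logprob:.3f}  {text}")
```

In the paper's pipeline, candidates whose likelihood score falls below a threshold are discarded, and the surviving synthetic pairs are used to fine-tune the downstream QA model on the target domain.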

Similar Work