Learning To Answer By Learning To Ask: Getting The Best Of GPT-2 And BERT Worlds

Klein Tassilo, Nabi Moin. arXiv 2019

[Paper]    
Applications Attention Mechanism BERT GPT Model Architecture Pretraining Methods Reinforcement Learning Tools Training Techniques Transformer

Automatic question generation aims to generate questions from a context, with the corresponding answers being sub-spans of the given passage. Whereas most methods rely on heuristic rules to generate questions, neural network approaches have also been proposed more recently. In this work, we propose a variant of the self-attention Transformer architecture to generate meaningful and diverse questions. To this end, we propose an easy-to-use model that combines the Transformer decoder GPT-2 with the Transformer encoder BERT for the downstream task of question answering. The model is trained in an end-to-end fashion, where the language model is trained to produce a question-answer-aware input representation that facilitates generating answer-focused questions. Our results for neural question generation from text on the SQuAD 1.1 dataset suggest that our method can produce semantically correct and diverse questions. Additionally, we assessed the performance of our proposed method on the downstream task of question answering. The analysis shows that our proposed generation and answering collaboration framework relatively improves both tasks and is particularly powerful in the semi-supervised setup. The results further suggest a robust and comparably lean pipeline that facilitates question generation in the small-data regime.
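The sketch below illustrates the generation-answering collaboration the abstract describes: a GPT-2 decoder proposes a question for a given passage and answer span, and a BERT encoder fine-tuned for SQuAD-style question answering tries to recover that answer. It is not the authors' training pipeline (the paper fine-tunes GPT-2 end-to-end on answer-aware inputs); the off-the-shelf checkpoints, the prompt format, and the Hugging Face `transformers` usage are assumptions made only to keep the example self-contained and runnable.

```python
# Hedged sketch (not the paper's code): pair a GPT-2 question generator with a
# BERT QA model to mimic the generation & answering collaboration at inference time.
import torch
from transformers import (
    GPT2LMHeadModel, GPT2Tokenizer,
    BertForQuestionAnswering, BertTokenizer,
)

passage = ("The Normans were the people who in the 10th and 11th centuries "
           "gave their name to Normandy, a region in France.")
answer = "Normandy"  # the target sub-span the generated question should ask about

# --- Question generation with a GPT-2 decoder (untuned here; the paper fine-tunes it) ---
gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
prompt = f"{passage} The answer is {answer}. Question:"  # assumed prompt format
input_ids = gpt2_tok.encode(prompt, return_tensors="pt")
gen_ids = gpt2.generate(
    input_ids,
    max_length=input_ids.shape[1] + 20,
    do_sample=True, top_k=50,
    pad_token_id=gpt2_tok.eos_token_id,
)
question = gpt2_tok.decode(
    gen_ids[0, input_ids.shape[1]:], skip_special_tokens=True
).strip()

# --- Downstream question answering with a SQuAD-finetuned BERT encoder ---
bert_name = "bert-large-uncased-whole-word-masking-finetuned-squad"
bert_tok = BertTokenizer.from_pretrained(bert_name)
bert_qa = BertForQuestionAnswering.from_pretrained(bert_name)
inputs = bert_tok(question, passage, return_tensors="pt")
with torch.no_grad():
    out = bert_qa(**inputs)
start = out.start_logits.argmax()
end = out.end_logits.argmax()
pred_span = bert_tok.decode(inputs["input_ids"][0, start:end + 1])

print("Generated question :", question)
print("Predicted answer   :", pred_span)  # compare against the seed answer span
```

In the paper's semi-supervised setup, agreement between the seeded answer and BERT's predicted span is what makes the two models useful to each other; the sketch only shows the inference-time loop, not the end-to-end training.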

Similar Work