Making Neural Machine Reading Comprehension Faster

Chatterjee, Debajyoti. arXiv 2019

[Paper]

Applications, BERT, Distillation, Efficiency And Optimization, Model Architecture

This study addresses the Machine Reading Comprehension problem, in which questions must be answered given a context passage. The challenge is to develop a computationally faster model with improved inference time. BERT, the state of the art in many natural language understanding tasks, is used as the teacher, and knowledge distillation is applied to train two smaller models. The resulting models are compared with other models developed with the same aim.
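The summary does not reproduce the training objective, but knowledge distillation of this kind is commonly implemented as a weighted mix of a soft loss against the teacher's temperature-scaled output distribution and a hard loss against the gold labels. The PyTorch sketch below is a minimal illustration of that standard setup, not the paper's exact loss; the `temperature` and `alpha` values are assumed hyperparameters.

```python
# A minimal sketch of a distillation objective (assumed form, not the
# paper's exact loss). For span-extraction reading comprehension this
# would typically be applied twice: once to the answer-start logits and
# once to the answer-end logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between the temperature-scaled student
    # and teacher distributions, rescaled by T^2 so gradient magnitudes
    # stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Hard targets: ordinary cross-entropy against the gold positions.
    hard = F.cross_entropy(student_logits, labels)

    # alpha balances imitating the teacher against fitting the labels;
    # 0.5 is an illustrative default, not a value from the paper.
    return alpha * soft + (1.0 - alpha) * hard

# Example with random tensors standing in for answer-start logits over
# a 384-token passage.
student = torch.randn(8, 384)
teacher = torch.randn(8, 384)
gold = torch.randint(0, 384, (8,))
print(distillation_loss(student, teacher, gold))
```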

Similar Work