Robustly Optimized And Distilled Training For Natural Language Understanding

Elfadeel Haytham, Peshterliev Stan. arXiv 2021

[Paper]    
Applications · Distillation · Efficiency And Optimization · Model Architecture · Pretraining Methods · Tools · Training Techniques · Transformer

In this paper, we explore multi-task learning (MTL) as a second pretraining step to learn an enhanced universal language representation for transformer language models. We use the MTL-enhanced representation across several natural language understanding tasks to improve performance and generalization. Moreover, we incorporate knowledge distillation (KD) into MTL to further boost performance and devise a KD variant that learns effectively from multiple teachers. By combining MTL and KD, we propose the Robustly Optimized and Distilled (ROaD) modeling framework. We use ROaD together with the ELECTRA model to obtain state-of-the-art results for machine reading comprehension and natural language inference.
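
The abstract does not spell out the form of the multi-teacher distillation objective, so the sketch below only illustrates one common way to combine a supervised task loss with distillation from several teachers (PyTorch). The function name `multi_teacher_kd_loss`, the temperature/`alpha` weighting, and the uniform averaging over teachers are illustrative assumptions, not the paper's actual ROaD formulation.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=2.0, alpha=0.5):
    """Hypothetical multi-teacher KD loss: supervised cross-entropy plus
    a distillation term averaged over teachers. Weighting scheme is an
    assumption, not the paper's specification."""
    # Supervised cross-entropy against gold labels (the task loss).
    ce = F.cross_entropy(student_logits, labels)

    # KL divergence between the student's softened distribution and each
    # teacher's, averaged uniformly over teachers.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = 0.0
    for teacher_logits in teacher_logits_list:
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        kd = kd + F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    kd = kd / len(teacher_logits_list) * (temperature ** 2)

    # Weighted sum of task loss and distillation loss.
    return alpha * ce + (1.0 - alpha) * kd


# Toy usage: batch of 4 examples, 3 classes, two teachers.
student_logits = torch.randn(4, 3, requires_grad=True)
teacher_logits = [torch.randn(4, 3), torch.randn(4, 3)]
labels = torch.tensor([0, 2, 1, 0])
loss = multi_teacher_kd_loss(student_logits, teacher_logits, labels)
loss.backward()
```

In an MTL setting, a loss of this shape would typically be computed per task head and summed across tasks; how ROaD weights tasks and teachers is described in the paper itself.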

Similar Work