
Improving BERT Fine-tuning Via Self-ensemble And Self-distillation

Xu Yige, Qiu Xipeng, Zhou Ligao, Huang Xuanjing. arXiv 2020

[Paper]    
BERT · Distillation · Efficiency And Optimization · Fine Tuning · Model Architecture · Pretraining Methods · RAG · Training Techniques

Fine-tuning pre-trained language models like BERT has become an effective approach in NLP, yielding state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-training tasks, and leveraging external data and knowledge. The fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. Experiments on text classification and natural language inference tasks show that the proposed methods can significantly improve the adaptation of BERT without any external data or knowledge.
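As a rough illustration only (not the authors' released code), the sketch below shows one way the two mechanisms could be combined in a PyTorch fine-tuning loop: a frozen copy of the student acts as the self-ensemble teacher, here maintained as a running average of past student weights standing in for an average over recent checkpoints, and a self-distillation term pulls the student's logits toward the teacher's. The model name, loss weight `lambda_kd`, and averaging rate `ema_decay` are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of BERT fine-tuning with self-ensemble + self-distillation.
# Assumes Hugging Face `transformers` and a dataloader yielding dicts with
# input_ids, attention_mask, and labels. Hyperparameters are hypothetical.
import copy
import torch
import torch.nn.functional as F
from transformers import BertForSequenceClassification

student = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
teacher = copy.deepcopy(student)          # self-ensemble: a running average of past student weights
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)
lambda_kd = 1.0                           # weight of the self-distillation term (assumed)
ema_decay = 0.999                         # moving-average rate, standing in for a K-step average (assumed)

def train_step(batch):
    student.train()
    out = student(input_ids=batch["input_ids"],
                  attention_mask=batch["attention_mask"])
    ce_loss = F.cross_entropy(out.logits, batch["labels"])

    # Teacher predictions come from the ensemble of earlier student states.
    with torch.no_grad():
        t_logits = teacher(input_ids=batch["input_ids"],
                           attention_mask=batch["attention_mask"]).logits
    kd_loss = F.mse_loss(out.logits, t_logits)   # self-distillation on logits

    loss = ce_loss + lambda_kd * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Update the self-ensemble teacher as a running average of the student.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)
    return loss.item()
```

Because the teacher is derived from the student itself, this setup needs no external teacher model or additional labeled data, which matches the claim in the abstract.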

Similar Work