Layer-wise Regularized Dropout For Neural Language Models

Ni Shiwen, Yang Min, Xu Ruifeng, Li Chengming, Hu Xiping. LREC-COLING 2024

[Paper]    
Applications Attention Mechanism Distillation Efficiency And Optimization Model Architecture Pretraining Methods Tools Training Techniques Transformer

Among the pre-trained neural language models that are popular today, dropout is an indispensable regularization technique. To resolve the inconsistency between training and inference caused by the randomness of dropout, some studies use consistency training to regularize dropout at the output layer. In this paper, we propose a novel Layer-wise Regularized Dropout (LR-Drop), which is specially designed for Transformer-based language models. Specifically, LR-Drop regularizes each Transformer layer with a consistency training strategy: each training sample passes through two siamese sub-models sampled by dropout, and LR-Drop forces the hidden states, multi-head attention matrices, and output distributions of the two sub-models to be consistent. LR-Drop can thus be regarded as a “self-distillation” framework in which each sub-model generated by dropout serves as both “teacher” and “student” for the other. Through extensive experiments on 8 natural language understanding datasets, 6 neural machine translation datasets, and 1 abstractive summarization dataset (15 datasets in total), we show that LR-Drop achieves superior performance, including state-of-the-art results.
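The abstract describes the core mechanism: two stochastic forward passes through the same weights (two dropout-sampled sub-models), with consistency losses tying their outputs and intermediate representations together. The paper does not ship code on this page, so the following is a minimal NumPy sketch under stated assumptions: a toy two-layer network stands in for a Transformer, the attention-matrix consistency term is omitted, and the loss weights `alpha`/`beta` and all shapes are hypothetical.

```python
import numpy as np

def dropout(x, p, rng):
    # Inverted dropout: random keep-mask, rescaled by 1/(1-p)
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, W1, W2, p, rng):
    # One dropout-sampled sub-model: hidden state, then output distribution
    h = np.maximum(dropout(x @ W1, p, rng), 0.0)
    return h, softmax(h @ W2)

def kl(p_dist, q_dist, eps=1e-12):
    # Mean KL divergence between two batches of distributions
    return np.sum(p_dist * (np.log(p_dist + eps) - np.log(q_dist + eps)),
                  axis=-1).mean()

def lr_drop_loss(x, W1, W2, p, rng, alpha=1.0, beta=1.0):
    # Two siamese sub-models sampled by dropout from the same weights
    h1, p1 = forward(x, W1, W2, p, rng)
    h2, p2 = forward(x, W1, W2, p, rng)
    # Output-distribution consistency: symmetric KL
    l_out = 0.5 * (kl(p1, p2) + kl(p2, p1))
    # Layer-wise consistency on hidden states (MSE); the full method
    # also aligns the multi-head attention matrices, omitted here
    l_hid = np.mean((h1 - h2) ** 2)
    return alpha * l_out + beta * l_hid
```

In training, this regularizer would be added to the usual task loss; with `p = 0` the two sub-models coincide and the consistency term vanishes, which is a quick sanity check.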

Similar Work