MC-BERT: Efficient Language Pre-training Via A Meta Controller

Xu Zhenhui, Gong Linyuan, Ke Guolin, He Di, Zheng Shuxin, Wang Liwei, Bian Jiang, Liu Tie-Yan. arXiv 2020

[Paper]    
Applications BERT Efficiency And Optimization Language Modeling Masked Language Model Model Architecture Pretraining Methods Reinforcement Learning Tools Training Techniques

Pre-trained contextual representations (e.g., BERT) have become the foundation for achieving state-of-the-art results on many NLP tasks. However, large-scale pre-training is computationally expensive. ELECTRA, an early attempt to accelerate pre-training, trains a discriminative model that predicts whether each input token was replaced by a generator. Our studies reveal that ELECTRA’s success is mainly due to the reduced complexity of its pre-training task: the binary classification (replaced token detection) is more efficient to learn than the generation task (masked language modeling). However, such a simplified task is less semantically informative. To achieve better efficiency and effectiveness, we propose a novel meta-learning framework, MC-BERT. The pre-training task is a multi-choice cloze test with a reject option, where a meta controller network provides training input and candidates. Results on the GLUE natural language understanding benchmark demonstrate that our proposed method is both efficient and effective: it outperforms baselines on GLUE semantic tasks given the same computational budget.
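To make the contrast with ELECTRA's binary replaced-token detection more concrete, the sketch below illustrates one plausible form of a multi-choice cloze loss with a reject option: each position is scored against K candidate token embeddings plus an extra "reject" class, and trained with (K+1)-way cross-entropy. This is a minimal, hypothetical PyTorch sketch; the tensor shapes, the dot-product scoring, the zero-valued reject logit, and the function name `multi_choice_cloze_loss` are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def multi_choice_cloze_loss(hidden_states, candidate_embeddings, labels):
    """(K+1)-way multi-choice cloze loss with a reject option (illustrative sketch).

    hidden_states:        (batch, seq_len, dim) contextual vectors from the main model
    candidate_embeddings: (batch, seq_len, K, dim) embeddings of K candidate tokens
                          proposed per position (assumed to come from a
                          meta-controller-style generator)
    labels:               (batch, seq_len) index in [0, K-1] of the correct candidate,
                          or K for the reject option (token was not replaced)
    """
    batch, seq_len, k, dim = candidate_embeddings.shape

    # Score each candidate by a dot product with the contextual vector: (batch, seq_len, K).
    candidate_logits = torch.einsum("bsd,bskd->bsk", hidden_states, candidate_embeddings)

    # Append a reject logit (fixed to zero here for simplicity) as the (K+1)-th class.
    reject_logit = torch.zeros(batch, seq_len, 1, device=hidden_states.device)
    logits = torch.cat([candidate_logits, reject_logit], dim=-1)  # (batch, seq_len, K+1)

    # Standard cross-entropy over the K+1 choices at every position.
    return F.cross_entropy(logits.view(-1, k + 1), labels.view(-1))


if __name__ == "__main__":
    # Toy shapes: batch of 2 sequences, length 8, hidden size 16, K = 5 candidates.
    b, s, k, d = 2, 8, 5, 16
    hidden = torch.randn(b, s, d)
    candidates = torch.randn(b, s, k, d)
    labels = torch.randint(0, k + 1, (b, s))  # label K means "reject / not replaced"
    print(multi_choice_cloze_loss(hidden, candidates, labels))
```

Compared with ELECTRA's per-token binary decision, a (K+1)-way choice of this kind forces the model to discriminate among semantically plausible candidates, which is the paper's motivation for a more informative yet still efficient pre-training signal.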

Similar Work