Larger-scale Transformers For Multilingual Masked Language Modeling

Goyal Naman, Du Jingfei, Ott Myle, Anantharaman Giri, Conneau Alexis. arXiv 2021

[Paper]    
BERT Language Modeling Masked Language Model Model Architecture Pretraining Methods RAG Training Techniques Transformer

Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models, dubbed XLM-R XL and XLM-R XXL, outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI, respectively. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests that pretrained models with larger capacity can achieve strong performance on high-resource languages while greatly improving performance on low-resource languages. We make our code and models publicly available.
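
The abstract notes that the pretrained models are publicly available. As a minimal sketch of how a multilingual masked language model like XLM-R XL can be queried for masked-token prediction, the snippet below uses the Hugging Face `transformers` API; the checkpoint name `facebook/xlm-roberta-xl` and the use of this library are assumptions not stated on this page.

```python
# Minimal sketch: masked-token prediction with an XLM-R XL checkpoint.
# Assumptions: the model is hosted as "facebook/xlm-roberta-xl" and is
# loadable through the Hugging Face `transformers` masked-LM interface.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "facebook/xlm-roberta-xl"  # assumed checkpoint name (3.5B-parameter model)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Mask one token in a sentence and let the model fill it in.
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take the highest-scoring token for it.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # e.g. "Paris"
```

Because the same tokenizer and model cover roughly 100 languages, the sentence above can be replaced with text in any supported language without changing the rest of the code.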

Similar Work