
Mixed-Distil-BERT: Code-Mixed Language Modeling for Bangla, English, and Hindi

Md Nishat Raihan, Dhiman Goswami, Antara Mahmud. arXiv 2023

[Paper]    
BERT, Language Modeling, Model Architecture, Reinforcement Learning, Training Techniques

Text classification is one of the most popular downstream tasks in Natural Language Processing, and it becomes considerably harder when the texts are code-mixed. Although BERT models are not exposed to such text during pre-training, several of them have nonetheless shown success on code-mixed NLP tasks. To boost their performance further, code-mixed NLP models have relied on combining synthetic data with real-world data. It is therefore important to understand how BERT models' performance is affected when they are pre-trained on the corresponding code-mixed languages. In this paper, we introduce Tri-Distil-BERT, a multilingual model pre-trained on Bangla, English, and Hindi, and Mixed-Distil-BERT, a model further fine-tuned on code-mixed data. Both models are evaluated across multiple NLP tasks and demonstrate competitive performance against larger models such as mBERT and XLM-R. Our two-tiered pre-training approach offers an efficient alternative for multilingual and code-mixed language understanding.
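The abstract's "two-tiered" recipe amounts to continued masked-language-model pre-training on the three languages followed by fine-tuning on code-mixed text. Below is a minimal sketch of that workflow using Hugging Face Transformers; the base checkpoint, the toy corpus, the output directory name, and all hyperparameters are placeholder assumptions for illustration, not the authors' released code or models.

```python
# Illustrative two-tier sketch (assumptions, not the paper's released code):
# Stage 1: continued MLM pre-training of a DistilBERT-style model on
#          Bangla/English/Hindi text.
# Stage 2: fine-tuning the resulting checkpoint on code-mixed classification.
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "distilbert-base-multilingual-cased"  # assumed starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)

# Stage 1: masked-language-model objective over a (toy) trilingual corpus.
mlm_model = AutoModelForMaskedLM.from_pretrained(base)
corpus = Dataset.from_dict(
    {"text": ["This is an English sentence.", "এটি একটি বাংলা বাক্য।", "यह एक हिंदी वाक्य है।"]}
)
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(
        output_dir="tri-distil-bert-sketch",  # placeholder directory name
        num_train_epochs=1,
        per_device_train_batch_size=8,
        report_to="none",
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
trainer.save_model("tri-distil-bert-sketch")
tokenizer.save_pretrained("tri-distil-bert-sketch")

# Stage 2: reuse the stage-1 checkpoint for code-mixed classification
# (e.g. sentiment over Banglish/Hinglish text) and fine-tune as usual.
clf = AutoModelForSequenceClassification.from_pretrained(
    "tri-distil-bert-sketch", num_labels=2
)
```

In this reading, the stage-1 model plays the role of Tri-Distil-BERT and the stage-2 fine-tuned classifier plays the role of Mixed-Distil-BERT; the actual corpora, tasks, and training schedules are described in the paper itself.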

Similar Work