
MoE-CT: A Novel Approach For Large Language Models Training With Resistance To Catastrophic Forgetting

Li Tianhao, Li Shangjie, Xie Binbin, Xiong Deyi, Yang Baosong. arXiv 2024

[Paper]    
Fine Tuning, Language Modeling, Model Architecture, Tools, Training Techniques

Large language models (LLMs) have predominantly catered to high-resource languages, leaving a disparity in performance for low-resource languages. Conventional Continual Training (CT) approaches to bridge this gap often undermine a model’s original linguistic proficiency when expanding to multilingual contexts. Addressing this issue, we introduce a novel MoE-CT architecture, a paradigm that innovatively separates the base model’s learning from the multilingual expansion process. Our design freezes the original LLM parameters, thus safeguarding its performance in high-resource languages, while an appended MoE module, trained on diverse language datasets, augments low-resource language proficiency. Our approach significantly outperforms conventional CT methods, as evidenced by our experiments, which show marked improvements in multilingual benchmarks without sacrificing the model’s original language performance. Moreover, our MoE-CT framework demonstrates enhanced resistance to forgetting and superior transfer learning capabilities. By preserving the base model’s integrity and focusing on strategic parameter expansion, our methodology advances multilingual language modeling and represents a significant step forward for low-resource language inclusion in LLMs, indicating a fruitful direction for future research in language technologies.
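
The central mechanism described in the abstract, freezing the base LLM's parameters and training only an appended MoE module on multilingual data, can be sketched roughly as follows in PyTorch. This is a minimal illustration under assumed details, not the paper's implementation: the `MoEAdapter` class, the soft softmax routing, the choice of four experts, and applying the adapters after the frozen forward pass are all simplifications introduced here for clarity.

```python
import torch
import torch.nn as nn


class MoEAdapter(nn.Module):
    """Small mixture-of-experts block appended alongside a frozen base model."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft routing: weight each expert's output by the router's softmax score.
        gate = torch.softmax(self.router(x), dim=-1)                    # (B, T, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, D, E)
        mixed = (expert_out * gate.unsqueeze(-2)).sum(dim=-1)           # (B, T, D)
        return x + mixed  # residual connection: the frozen path is preserved


def freeze_base_and_attach(base_model: nn.Module, d_model: int, n_layers: int) -> nn.ModuleList:
    """Freeze every base-model parameter and return trainable MoE adapters."""
    for p in base_model.parameters():
        p.requires_grad = False  # safeguards high-resource-language performance
    return nn.ModuleList([MoEAdapter(d_model, 4 * d_model) for _ in range(n_layers)])


if __name__ == "__main__":
    # Stand-in for a pretrained LLM; only the adapters' parameters are optimized.
    base = nn.TransformerEncoder(nn.TransformerEncoderLayer(512, 8, batch_first=True), 2)
    adapters = freeze_base_and_attach(base, d_model=512, n_layers=2)
    optimizer = torch.optim.AdamW(adapters.parameters(), lr=1e-4)

    x = torch.randn(1, 16, 512)
    hidden = base(x)              # frozen forward pass, no gradients for base weights
    for adapter in adapters:
        hidden = adapter(hidden)  # only these parameters receive gradients
    loss = hidden.pow(2).mean()   # placeholder loss for illustration
    loss.backward()
    optimizer.step()
```

Because the optimizer sees only the adapter parameters, gradient updates cannot alter the base model's weights, which is the property the paper credits for resistance to catastrophic forgetting.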

Similar Work