
Rethinking LLM Language Adaptation: A Case Study On Chinese Mixtral

Cui Yiming, Yao Xin. arXiv 2024

[Paper] [Code]
Tags: Attention Mechanism, Fine Tuning, Has Code, Model Architecture, Pretraining Methods, Reinforcement Learning, Training Techniques

Mixtral, a representative sparse mixture-of-experts (SMoE) language model, has received significant attention due to its unique model design and superior performance. Based on Mixtral-8x7B-v0.1, in this paper we propose Chinese-Mixtral and Chinese-Mixtral-Instruct, which improve Chinese language abilities through further pre-training and instruction fine-tuning. Experimental results show that Chinese-Mixtral and Chinese-Mixtral-Instruct successfully improve Chinese understanding and generation performance while retaining the original English abilities. We then discuss several key questions that arise when performing language adaptation on large language models, including the necessity of extending the language-specific vocabulary and the choice of initialization model (foundation model vs. instruction model), supported by empirical results and analysis. We also visualize each expert to examine its importance in downstream tasks. Our resources are publicly available at https://github.com/ymcui/Chinese-Mixtral.
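To make the vocabulary-extension question concrete, below is a minimal sketch of how language-specific tokens could be merged into the base model's tokenizer, assuming the Hugging Face `transformers` API. The token list and procedure are illustrative assumptions for exposition, not the authors' actual pipeline.

```python
# Minimal sketch: extend a Mixtral tokenizer with language-specific tokens and
# resize the embedding matrices to match. Assumes the Hugging Face
# `transformers` library and access to the Mixtral-8x7B-v0.1 weights (loading
# them requires substantial memory). The token list is a hypothetical example.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical Chinese tokens; a real extension would merge a vocabulary
# learned from a large Chinese corpus.
new_tokens = ["你好", "语言", "模型"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the input/output embeddings so the new token ids are usable. The new
# rows are freshly initialized and only become useful after further
# pre-training on the target language.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocabulary size is now {len(tokenizer)}")
```

Whether this extension step pays off is precisely the empirical question the paper examines; the sketch only shows the mechanics it concerns.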

Similar Work