How Chinese Are Chinese Language Models? The Puzzling Lack Of Language Policy In China's Llms

Andrea W. Wen-Yi, Unso Eun Seo Jo, Lu Jia Lin, David Mimno. arXiv 2024

[Paper]    
Pretraining Methods, RAG, Reinforcement Learning, Training Techniques

Contemporary language models are increasingly multilingual, but Chinese LLM developers must navigate complex political and business considerations around language diversity. Language policy in China aims to influence public discourse and govern a multi-ethnic society, and it has gradually shifted from a pluralist to a more assimilationist approach since 1949. We explore the impact of these influences on current language technology. We evaluate six open-source multilingual LLMs pre-trained by Chinese companies on 18 languages, spanning a wide range of Chinese, Asian, and Anglo-European languages. Our experiments show that the performance of Chinese LLMs on diverse languages is indistinguishable from that of international LLMs. Similarly, the models' technical reports show little consideration of pretraining data language coverage beyond English and Mandarin Chinese. Examining Chinese AI policy, model experiments, and technical reports, we find no sign of any consistent policy, either for or against, language diversity in China's LLM development. This leaves the puzzling fact that while China regulates both the languages people use daily and language model development, it does not seem to have any policy on the languages represented in language models.