In What Languages Are Generative Language Models The Most Formal? Analyzing Formality Distribution Across Languages

Asım Ersoy, Gerson Vizcarra, Tasmiah Tahsin Mayeesha, Benjamin Muller. arXiv 2023

[Paper]

Tags: Ethics And Bias, Prompting

Multilingual generative language models (LMs) are increasingly fluent in a large variety of languages. Trained on the concatenation of corpora in multiple languages, they enable powerful transfer from high-resource languages to low-resource ones. However, it is still unknown what cultural biases these models' predictions carry. In this work, we focus on one language property highly influenced by culture: formality. We analyze the formality distributions of the predictions of XGLM and BLOOM, two popular generative multilingual language models, in 5 languages. We classify 1,200 generations per language as formal, informal, or incohesive and measure the impact of the prompt's formality on the predictions. Overall, we observe a diversity of behaviors across the models and languages. For instance, when conditioned on informal prompts, XGLM generates informal text in Arabic and Bengali far more often than BLOOM does. In addition, even though both models are strongly biased toward the formal style when prompted neutrally, they still generate a significant amount of informal predictions even when prompted with formal text. With this work, we release 6,000 annotated samples, paving the way for future work on the formality of generative multilingual LMs.
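The core measurement in the abstract is a label distribution: each generation is annotated as formal, informal, or incohesive, and the fractions are compared across prompt conditions. A minimal sketch of that tallying step, with a hypothetical `formality_distribution` helper and made-up example labels (the paper's actual annotation pipeline is not specified here):

```python
from collections import Counter

LABELS = ("formal", "informal", "incohesive")

def formality_distribution(labels):
    """Return the fraction of generations carrying each formality label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: counts[label] / total for label in LABELS}

# Hypothetical annotations for generations conditioned on informal prompts
annotations = ["formal"] * 70 + ["informal"] * 25 + ["incohesive"] * 5
print(formality_distribution(annotations))
# {'formal': 0.7, 'informal': 0.25, 'incohesive': 0.05}
```

Comparing such distributions between neutral, formal, and informal prompt conditions is what reveals the kind of bias the abstract describes (e.g. a skew toward formal generations under neutral prompts).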

Similar Work