Towards Truthful Multilingual Large Language Models: Benchmarking And Alignment Strategies

Weihao Liu, Ning Wu, Wenbiao Ding, Shining Liang, Ming Gong, Dongmei Zhang. arXiv 2024

[Paper]    
Tags: Reinforcement Learning, Uncategorized

In the era of large language models (LLMs), building multilingual large language models (MLLMs) that can serve users worldwide is of great significance. However, existing research seldom examines the truthfulness of MLLMs. Meanwhile, contemporary multilingual alignment techniques struggle to balance large numbers of languages and often exhibit serious truthfulness gaps across languages, especially those that differ greatly from English. In this work, we construct a benchmark for truthfulness evaluation in multilingual scenarios and explore ways to align facts across languages to enhance the truthfulness of MLLMs. Furthermore, we propose Fact-aware Multilingual Selective Synergy (FaMSS), which optimizes data allocation across a large number of languages and different data types. Experimental results demonstrate that our approach effectively reduces multilingual representation disparity and enhances the multilingual capabilities of LLMs.
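The abstract does not describe how FaMSS performs its data allocation, so the following is only a minimal illustrative sketch of one plausible reading: split a fixed multilingual training budget across languages in proportion to an estimated per-language transfer benefit, with a floor so low-resource languages are never starved. The `transfer_benefit` scores, the `floor` parameter, and the `allocate_budget` helper are all hypothetical stand-ins, not the paper's actual algorithm.

```python
# Hypothetical per-language "transfer benefit" scores: how much adding
# alignment data in that language is estimated to improve cross-lingual
# truthfulness. In practice such scores would be measured empirically;
# the numbers below are placeholders for illustration only.
transfer_benefit = {
    "de": 0.90,
    "fr": 0.85,
    "zh": 0.60,
    "ar": 0.50,
    "sw": 0.35,
}


def allocate_budget(total_examples: int, benefit: dict[str, float],
                    floor: float = 0.05) -> dict[str, int]:
    """Split a training-data budget across languages.

    Each language first receives a fixed floor share of the budget;
    the remainder is divided in proportion to its benefit score.
    """
    floor_count = int(total_examples * floor)
    remaining = total_examples - floor_count * len(benefit)
    total_benefit = sum(benefit.values())
    return {
        lang: floor_count + int(remaining * score / total_benefit)
        for lang, score in benefit.items()
    }


if __name__ == "__main__":
    # Allocate a 10,000-example budget across the five example languages.
    print(allocate_budget(10_000, transfer_benefit))
```

A proportional split with a floor is just one simple allocation policy; the paper's selective-synergy approach presumably also accounts for interactions between languages and data types, which this sketch ignores.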

Similar Work