
Beneath The Surface Of Consistency: Exploring Cross-lingual Knowledge Representation Sharing In LLMs

Ifergan Maxim, Choshen Leshem, Aharoni Roee, Szpektor Idan, Abend Omri. arXiv 2024

[Paper]    

The veracity of a factoid is largely independent of the language it is written in. However, language models are inconsistent in their ability to answer the same factual question across languages. This raises questions about how LLMs represent a given fact across languages. We explore multilingual factual knowledge through two aspects: the model's ability to answer a query consistently across languages, and the ability to "store" answers in a representation shared across several languages. We propose a methodology to measure the extent of representation sharing across languages by repurposing knowledge editing methods. We examine LLMs with various multilingual configurations using a new multilingual dataset. We reveal that high consistency does not necessarily imply shared representation, particularly for languages with different scripts. Moreover, we find that script similarity is a dominant factor in representation sharing. Finally, we observe that if LLMs could fully share knowledge across languages, their accuracy in their best-performing language could increase by up to 150% on average. These findings highlight the need for improved multilingual knowledge representation in LLMs and suggest a path for the development of more robust and consistent multilingual LLMs.
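The abstract only sketches the methodology, so the snippet below is a minimal, hypothetical illustration of the two quantities it distinguishes: surface-level answer consistency across languages, and representation sharing probed by checking whether a knowledge edit applied in one language carries over to others. The function names, the exact string-matching rule, and the toy data are assumptions, not the paper's actual metrics or implementation.

```python
from itertools import combinations

def cross_lingual_consistency(answers: dict[str, str]) -> float:
    """Fraction of language pairs that give the same (normalized) answer to one query.

    `answers` maps a language code to the model's answer, assumed here to be
    already normalized into a common form; the real matching rule is not
    specified in the abstract.
    """
    pairs = list(combinations(answers.values(), 2))
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

def edit_transfer_rate(post_edit_answers: dict[str, str],
                       target_answer: str,
                       edited_lang: str) -> float:
    """Share of *other* languages whose answer changed to the edited target.

    Repurposes the knowledge-editing idea: if editing a fact in `edited_lang`
    also changes the model's answer in another language, the two languages
    plausibly read that fact from a shared representation.
    """
    others = {lang: ans for lang, ans in post_edit_answers.items() if lang != edited_lang}
    if not others:
        return 0.0
    return sum(ans == target_answer for ans in others.values()) / len(others)

# Toy example: answers are consistent before the edit, yet the edit made in
# English only propagates to French -- high consistency without full sharing.
pre_edit = {"en": "Paris", "fr": "Paris", "he": "Paris"}
post_edit = {"en": "Lyon", "fr": "Lyon", "he": "Paris"}
print(cross_lingual_consistency(pre_edit))       # 1.0
print(edit_transfer_rate(post_edit, "Lyon", "en"))  # 0.5
```

The toy output mirrors the paper's headline observation: answers can agree across languages while an edit fails to transfer, indicating that consistency alone does not prove a shared underlying representation.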

Similar Work