How Do Large Language Models Handle Multilingualism?

Zhao Yiran, Zhang Wenxuan, Chen Guizhen, Kawaguchi Kenji, Bing Lidong. arXiv 2024

[Paper]    
Attention Mechanism, Fine Tuning, Model Architecture, Pretraining Methods, RAG, Training Techniques, Transformer

Large language models (LLMs) have demonstrated impressive capabilities across diverse languages. This study explores how LLMs handle multilingualism. Based on observed shifts in language ratios across layers and the relationships between network structures and certain capabilities, we hypothesize the LLM's multilingual workflow (\(\texttt{MWork}\)): LLMs first understand the query, converting multilingual inputs into English for task-solving. In the intermediate layers, they think in English and incorporate multilingual knowledge through the self-attention and feed-forward structures, respectively. In the final layers, LLMs generate responses aligned with the original language of the query. To verify \(\texttt{MWork}\), we introduce Parallel Language-specific Neuron Detection (\(\texttt{PLND}\)), which identifies the neurons activated by inputs in different languages without any labeled data. Using \(\texttt{PLND}\), we validate \(\texttt{MWork}\) through extensive experiments that deactivate language-specific neurons across various layers and structures. Moreover, \(\texttt{MWork}\) allows fine-tuning of language-specific neurons with a small dataset, enhancing multilingual abilities in a specific language without compromising others. This approach yields an average improvement of \(3.6\%\) for high-resource languages and \(2.3\%\) for low-resource languages across all tasks with just \(400\) documents.
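The abstract only summarizes \(\texttt{PLND}\) and the deactivation experiments, so the snippet below is a minimal, hypothetical sketch of the general idea: compare average feed-forward neuron activations on unlabeled text in two languages, treat neurons that respond far more strongly to one language as "language-specific", and zero them at inference time via a forward hook. The model name, the activation-ratio threshold, and the choice of hooking `up_proj` in a LLaMA-style MLP are all illustrative assumptions, not the paper's exact \(\texttt{PLND}\) procedure.

```python
# Hypothetical sketch of language-specific neuron detection and deactivation.
# Assumes a LLaMA-style Hugging Face checkpoint; thresholds are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device)
model.eval()


@torch.no_grad()
def mean_ffn_activation(texts, layer_idx):
    """Average |activation| of each FFN intermediate neuron over unlabeled texts."""
    acts = []
    mlp = model.model.layers[layer_idx].mlp
    handle = mlp.up_proj.register_forward_hook(
        lambda _m, _i, out: acts.append(out.abs().mean(dim=(0, 1)).float())
    )
    for text in texts:
        ids = tok(text, return_tensors="pt", truncation=True, max_length=256).to(device)
        model(**ids)
    handle.remove()
    return torch.stack(acts).mean(dim=0)


def language_specific_neurons(texts_lang, texts_other, layer_idx, ratio=2.0):
    """Neurons markedly more active for one language than another (assumed criterion)."""
    a = mean_ffn_activation(texts_lang, layer_idx)
    b = mean_ffn_activation(texts_other, layer_idx)
    return torch.nonzero(a > ratio * (b + 1e-6)).squeeze(-1)


def deactivate_neurons(layer_idx, neuron_ids):
    """Zero the selected FFN neurons at inference time; returns the hook handle."""
    def hook(_m, _i, out):
        out[..., neuron_ids] = 0.0
        return out
    return model.model.layers[layer_idx].mlp.up_proj.register_forward_hook(hook)


# Usage: detect neurons on small unlabeled corpora, deactivate them, and observe
# how multilingual task performance changes in that layer.
# ids = language_specific_neurons(french_texts, english_texts, layer_idx=15)
# handle = deactivate_neurons(15, ids)
# ...run evaluation...
# handle.remove()
```

In the same spirit, the reported fine-tuning result could be approximated by restricting gradient updates to the detected neuron indices (e.g., masking gradients of the corresponding FFN weight rows), though the paper's exact training setup is not specified in this abstract.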

Similar Work