Getting More From Less: Large Language Models Are Good Spontaneous Multilingual Learners

Zhang Shimao, Gao Changjiang, Zhu Wenhao, Chen Jiajun, Huang Xin, Han Xue, Feng Junlan, Deng Chao, Huang Shujian. arXiv 2024

[Paper]    
Fine Tuning Interpretability And Explainability RAG

Recently, Large Language Models (LLMs) have shown impressive language capabilities, yet most existing LLMs perform very unevenly across languages. Multilingual alignment based on parallel translation data is an effective method for enhancing LLMs' multilingual capabilities. In this work, we discover and comprehensively investigate the spontaneous multilingual alignment improvement of LLMs. We find that instruction-tuning LLMs on question translation data (i.e., without annotated answers) encourages alignment between English and a wide range of languages, including languages unseen during instruction tuning. Additionally, we use different settings and mechanistic interpretability methods to comprehensively analyze the LLMs' performance in multilingual scenarios. Our work suggests that LLMs have enormous potential for improving multilingual alignment efficiently, with strong generalization across languages and tasks.
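The core data construction described above — instruction-tuning on question translation pairs with no annotated answers — can be sketched as follows. This is a minimal illustrative example; the prompt template and record fields are assumptions, not the authors' exact format.

```python
# Hypothetical sketch: building instruction-tuning examples from question
# translation pairs only (no annotated answers). The model is trained to
# produce the translated question, never an answer to it.

def build_translation_examples(parallel_questions):
    """Turn (src_lang, tgt_lang, src_question, tgt_question) tuples into
    instruction-tuning records. Field names are illustrative assumptions."""
    examples = []
    for src_lang, tgt_lang, src_q, tgt_q in parallel_questions:
        prompt = (
            f"Translate the following question from {src_lang} "
            f"to {tgt_lang}:\n{src_q}"
        )
        examples.append({"instruction": prompt, "output": tgt_q})
    return examples

pairs = [
    ("English", "Spanish",
     "What is the capital of France?",
     "¿Cuál es la capital de Francia?"),
]
data = build_translation_examples(pairs)
print(data[0]["output"])  # the translated question; no answer is attached
```

The key design point is that the supervision target is only the translated question, so any downstream gains in answering questions across languages reflect spontaneous alignment rather than answer supervision.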

Similar Work