X-instruction: Aligning Language Model In Low-resource Languages With Self-curated Cross-lingual Instructions

Li Chong, Yang Wen, Zhang Jiajun, Lu Jinliang, Wang Shaonan, Zong Chengqing. arXiv 2024

[Paper]
Tags: GPT, Model Architecture, Uncategorized

Large language models respond well in high-resource languages like English but struggle in low-resource languages, likely because high-quality instruction-following data is scarce in those languages. Directly translating English samples into these languages can be a workaround, but it is unreliable, yielding responses that contain translation errors and lack language-specific or cultural knowledge. To address this issue, we propose a novel method for constructing cross-lingual instruction-following samples with instructions in English and responses in low-resource languages. Specifically, the language model first learns to generate appropriate English instructions for natural web texts in other languages, which serve as the responses. The candidate cross-lingual instruction-tuning samples are then refined and diversified. We have employed this method to build X-Instruction, a large-scale cross-lingual instruction-tuning dataset covering 10 languages. The instruction data built with our method incorporate more language-specific knowledge than those produced by naive translation. Experimental results show that the response quality of a model tuned on X-Instruction greatly exceeds that of a model distilled from a powerful teacher model, reaching or even surpassing the quality of ChatGPT. In addition, we find that models tuned on cross-lingual instruction-following samples can follow instructions in the output language without further tuning.
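The abstract describes a three-stage data-construction pipeline: generate an English instruction for a target-language web text (treated as the response), then refine and diversify the candidate pairs. The sketch below is only an illustration of that flow under stated assumptions; `generate_instruction` and `score_quality` are hypothetical placeholders standing in for the paper's instruction-generation and refinement models, not the authors' released code.

```python
# Illustrative sketch of a cross-lingual instruction-construction pipeline
# (assumed structure based on the abstract; model calls are placeholders).
from dataclasses import dataclass

@dataclass
class CrossLingualSample:
    instruction_en: str   # English instruction generated for the text
    response: str         # natural web text in the low-resource language
    language: str

def generate_instruction(web_text: str, language: str) -> str:
    """Hypothetical: given a target-language web text treated as the response,
    produce a plausible English instruction it could answer."""
    return f"[English instruction inferred from a {language} passage]"

def score_quality(sample: CrossLingualSample) -> float:
    """Hypothetical refinement filter, e.g. instruction-response fit."""
    return 1.0 if sample.response.strip() else 0.0

def build_samples(web_texts: list[tuple[str, str]],
                  threshold: float = 0.5) -> list[CrossLingualSample]:
    """Turn (language, web_text) pairs into refined cross-lingual samples."""
    candidates = [
        CrossLingualSample(
            instruction_en=generate_instruction(text, lang),
            response=text,
            language=lang,
        )
        for lang, text in web_texts
    ]
    # Refinement step: keep only candidates passing the quality filter.
    return [s for s in candidates if score_quality(s) >= threshold]

if __name__ == "__main__":
    corpus = [("Swahili", "Habari za leo ...")]
    for s in build_samples(corpus):
        print(s.language, "|", s.instruction_en, "->", s.response[:30])
```

In this reading, the English instruction side gives the model strong task-following signal from the high-resource language, while the response side keeps the language-specific and cultural knowledge of the original web text, which naive translation would lose.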

Similar Work