Fine-tuning Large Language Models To Translate: Will A Touch Of Noisy Data In Misaligned Languages Suffice?

Dawei Zhu, Pinzhen Chen, Miaoran Zhang, Barry Haddow, Xiaoyu Shen, Dietrich Klakow. arXiv 2024

[Paper]

Tags: Applications, Ethics And Bias, Fine Tuning, Pretraining Methods, Training Techniques

Traditionally, success in multilingual machine translation has been attributed to three key properties of the training data: large volume, diverse translation directions, and high quality. In the current practice of fine-tuning large language models (LLMs) for translation, we revisit the importance of each of these factors. We find that LLMs display strong translation capability after being fine-tuned on as few as 32 parallel training instances, and that fine-tuning on a single translation direction effectively enables LLMs to translate in multiple directions. However, the choice of direction is critical: fine-tuning LLMs with English on the target side can lead to task misinterpretation, which hinders translation into non-English languages. A similar problem arises when noise is introduced into the target side of the parallel data, especially when the target language is well represented in the LLM’s pre-training corpus; noise in an under-represented target language has a less pronounced effect. Our findings suggest that successful alignment hinges on teaching the model a “superficial” focus on the translation task itself, thereby avoiding the learning of erroneous biases beyond translation.
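The setup the abstract describes, fine-tuning on a handful of parallel instances in a single translation direction, can be sketched as follows. This is a minimal, hypothetical illustration: the instruction template, field names, and the 32-instance cap are assumptions for demonstration, not the paper's exact format.

```python
# Hypothetical sketch of preparing a tiny single-direction fine-tuning set
# (e.g. German -> English) as described in the abstract. The prompt template
# and the {"prompt", "completion"} record layout are illustrative assumptions.

def build_finetune_examples(pairs, src_lang="German", tgt_lang="English", limit=32):
    """Format (source, target) sentence pairs as instruction-style examples,
    keeping at most `limit` instances (the abstract reports that as few as
    32 can suffice)."""
    examples = []
    for src, tgt in pairs[:limit]:
        prompt = f"Translate the following {src_lang} sentence into {tgt_lang}:\n{src}"
        examples.append({"prompt": prompt, "completion": tgt})
    return examples

# Toy parallel data for one direction only.
pairs = [
    ("Guten Morgen.", "Good morning."),
    ("Danke schön.", "Thank you very much."),
]
data = build_finetune_examples(pairs)
print(len(data))
print(data[0]["completion"])
```

Note that the target side carries all of the supervision here, which is why, per the paper's findings, noise or a mismatched language on that side is what most affects what the model learns.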

Similar Work