Hallucinations In Large Multilingual Translation Models

Nuno M. Guerreiro, Duarte Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, Pierre Colombo, André F. T. Martins. arXiv 2023

[Paper]

Tags: Applications, GPT, Model Architecture, Prompting, Reinforcement Learning, Responsible AI

Large-scale multilingual machine translation systems have demonstrated remarkable ability to translate directly between numerous languages, making them increasingly appealing for real-world applications. However, when deployed in the wild, these models may generate hallucinated translations which have the potential to severely undermine user trust and raise safety concerns. Existing research on hallucinations has primarily focused on small bilingual models trained on high-resource languages, leaving a gap in our understanding of hallucinations in massively multilingual models across diverse translation scenarios. In this work, we fill this gap by conducting a comprehensive analysis on both the M2M family of conventional neural machine translation models and ChatGPT, a general-purpose large language model (LLM) that can be prompted for translation. Our investigation covers a broad spectrum of conditions, spanning over 100 translation directions across various resource levels and going beyond English-centric language pairs. We provide key insights regarding the prevalence, properties, and mitigation of hallucinations, paving the way towards more responsible and reliable machine translation systems.
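The abstract notes that ChatGPT is used as a translator purely through prompting. As a minimal sketch of what that looks like in practice, the snippet below prompts an OpenAI chat model to translate a single sentence; the model name, prompt wording, and `translate` helper are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: prompting a general-purpose LLM for translation.
# The model name, prompt template, and language pair below are
# illustrative assumptions, not the configuration used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def translate(text: str, src: str = "German", tgt: str = "English") -> str:
    """Ask the chat model to translate `text` from `src` into `tgt`."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed ChatGPT-era endpoint
        messages=[
            {"role": "system", "content": "You are a translation assistant."},
            {
                "role": "user",
                "content": f"Translate the following {src} sentence into {tgt}:\n{text}",
            },
        ],
        temperature=0,  # greedy-like decoding for reproducible output
    )
    return response.choices[0].message.content.strip()


print(translate("Der schnelle braune Fuchs springt über den faulen Hund."))
```

Checking such outputs against the source sentence (e.g., with reference-free quality metrics) is one way to detect the hallucinated translations the paper studies.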
