
Paying More Attention To Source Context: Mitigating Unfaithful Translations From Large Language Model

Zhang Hongbin, Chen Kehai, Bai Xuefeng, Xiang Yang, Zhang Min. arXiv 2024

[Paper]    
Attention Mechanism · Ethics And Bias · Model Architecture · Prompting · RAG

Large language models (LLMs) have showcased impressive multilingual machine translation ability. However, unlike encoder-decoder models, decoder-only LLMs lack an explicit alignment between source and target contexts. Analyzing contribution scores during generation reveals that LLMs can be biased towards previously generated tokens over the corresponding source tokens, leading to unfaithful translations. To address this issue, we propose encouraging LLMs to pay more attention to the source context from both the source and target perspectives in zero-shot prompting: 1) adjusting source context attention weights; 2) suppressing the influence of irrelevant target prefixes. Additionally, we propose 3) avoiding over-reliance on the target prefix in instruction tuning. Experimental results on both human-collected unfaithfulness test sets (focusing on LLM-generated unfaithful translations) and general test sets verify our methods’ effectiveness across multiple language pairs. Further human evaluation shows our method’s efficacy in reducing hallucinatory translations and facilitating faithful translation generation.
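The first intervention, adjusting source context attention weights, can be sketched in a simplified form: before the softmax, bias the attention logits of positions belonging to the source sentence so the decoder allocates more probability mass to source tokens than to the previously generated target prefix. The function names and the scaling factor `alpha` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def reweighted_attention(scores, source_len, alpha=2.0):
    """Boost attention toward source-context positions.

    scores:     (tgt_len, ctx_len) raw attention logits, where the
                first `source_len` context positions are source tokens
                and the rest are the generated target prefix.
    alpha:      multiplicative boost on source positions (illustrative);
                adding log(alpha) to a logit multiplies its pre-softmax
                weight by alpha.
    """
    biased = scores.copy()
    biased[:, :source_len] += np.log(alpha)
    return softmax(biased)  # rows still sum to 1 after renormalization
```

With `alpha > 1`, the total attention mass on source positions strictly increases relative to the unbiased softmax, which is the intended effect: the target prefix contributes less to each next-token prediction.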

Similar Work