Benchmarking Llm-based Machine Translation On Cultural Awareness

Binwei Yao, Ming Jiang, Diyi Yang, Junjie Hu. arXiv 2023

[Paper]    
Tags: Applications, GPT, In-Context Learning, Interpretability and Explainability, Model Architecture, Prompting

Translating culture-specific content is crucial for effective cross-cultural communication, yet many MT systems still struggle to translate sentences containing culture-specific entities accurately and understandably. Recent advances in in-context learning use lightweight prompts to guide large language models (LLMs) in machine translation tasks, but how effective this approach is at making translation culturally aware remains uncertain. To address this gap, we introduce a new data curation pipeline that constructs a culturally relevant parallel corpus enriched with annotations of culture-specific items. We also devise a novel evaluation metric that uses GPT-4 to assess the understandability of translations in a reference-free manner. We evaluate a variety of neural machine translation (NMT) and LLM-based MT systems on our dataset, and propose several prompting strategies that let LLMs incorporate external and internal cultural knowledge into the translation process. Our results demonstrate that eliciting explanations can significantly enhance the understandability of culture-specific entities, especially those without well-known translations.
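To make the explanation-eliciting idea concrete, here is a minimal sketch of how such a prompt might be assembled. The prompt wording, the entity annotation format, and the helper name `build_culturally_aware_prompt` are illustrative assumptions, not the paper's actual implementation:

```python
def build_culturally_aware_prompt(source_sentence, entities,
                                  src_lang="Chinese", tgt_lang="English"):
    """Assemble a translation prompt that supplies external knowledge
    about culture-specific entities and asks the model to explain any
    entity lacking a well-known target-language translation.
    (Hypothetical sketch; not the paper's exact prompt.)"""
    entity_notes = "\n".join(
        f"- {e['name']}: {e['description']}" for e in entities
    )
    return (
        f"Translate the following {src_lang} sentence into {tgt_lang}.\n"
        f"The sentence contains the culture-specific entities listed below. "
        f"If an entity has no well-known {tgt_lang} translation, add a brief "
        f"parenthetical explanation so the translation stays understandable.\n\n"
        f"Culture-specific entities:\n{entity_notes}\n\n"
        f"Sentence: {source_sentence}\n"
        f"Translation:"
    )

prompt = build_culturally_aware_prompt(
    "他点了一份豆汁儿。",
    [{"name": "豆汁儿",
      "description": "a fermented mung-bean drink traditional to Beijing"}],
)
print(prompt)
```

The assembled string would then be sent to an LLM as an in-context prompt; the key design choice mirrored here is passing entity descriptions as external knowledge so the model can generate an explanation rather than a bare transliteration.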

Similar Work