GPT-4 Vs. Human Translators: A Comprehensive Evaluation Of Translation Quality Across Languages, Domains, And Expertise Levels

Yan Jianhao, Yan Pingchuan, Chen Yulong, Li Judy, Zhu Xianchao, Zhang Yue. arXiv 2024

[Paper]
GPT Model Architecture

This study comprehensively evaluates the translation quality of Large Language Models (LLMs), specifically GPT-4, against human translators of varying expertise levels across multiple language pairs and domains. Through carefully designed annotation rounds, we find that GPT-4 performs comparably to junior translators in terms of total errors made but lags behind medium and senior translators. We also observe imbalanced performance across languages and domains, with GPT-4’s translation capability gradually weakening as directions move from resource-rich to resource-poor. In addition, we qualitatively study the translations produced by GPT-4 and human translators, finding that GPT-4 tends toward overly literal translations, while human translators sometimes over-interpret background information. To our knowledge, this study is the first to evaluate LLMs against human translators and analyze the systematic differences between their outputs, providing valuable insights into the current state of LLM-based translation and its potential limitations.
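To make the "total errors" comparison concrete, below is a minimal sketch of how annotated error counts might be tallied per translator group. The record schema, translator labels, and severity weights are illustrative assumptions, not the paper's actual annotation pipeline; the 1/5 minor/major weighting follows a common MQM-style convention.

```python
from collections import defaultdict

# Hypothetical annotation records: each entry marks one error an annotator
# flagged in a translated segment. Field names and values are assumptions
# for illustration, not the paper's schema.
annotations = [
    {"translator": "gpt-4",  "segment": 1, "severity": "minor"},
    {"translator": "gpt-4",  "segment": 2, "severity": "major"},
    {"translator": "junior", "segment": 1, "severity": "minor"},
    {"translator": "junior", "segment": 3, "severity": "major"},
    {"translator": "senior", "segment": 2, "severity": "minor"},
]

# Assumed severity weights (MQM commonly weights minor=1, major=5).
SEVERITY_WEIGHT = {"minor": 1, "major": 5}

def error_totals(records):
    """Tally raw error counts and severity-weighted scores per translator."""
    counts = defaultdict(int)
    weighted = defaultdict(float)
    for r in records:
        counts[r["translator"]] += 1
        weighted[r["translator"]] += SEVERITY_WEIGHT[r["severity"]]
    return dict(counts), dict(weighted)

counts, weighted = error_totals(annotations)
for name in sorted(counts):
    print(f"{name:>6}: {counts[name]} errors, weighted score {weighted[name]}")
```

Under this kind of aggregation, "comparable to junior translators" would mean the GPT-4 row's totals fall in the same range as the junior-translator rows, while senior translators accumulate fewer errors overall.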

Similar Work