Large Language Models "ad Referendum": How Good Are They At Machine Translation In The Legal Domain?

Briva-Iglesias Vicent, Camargo Joao Lucas Cavalheiro, Dogru Gokhan. arXiv 2024

[Paper]    
Applications GPT Model Architecture

This study evaluates the machine translation (MT) quality of two state-of-the-art large language models (LLMs) against a traditional neural machine translation (NMT) system across four language pairs in the legal domain. It combines automatic evaluation metrics (AEMs) and human evaluation (HE) by professional translators to assess translation ranking, fluency and adequacy. The results indicate that while Google Translate generally outperforms LLMs in AEMs, human evaluators rate LLMs, especially GPT-4, comparably or slightly better in terms of producing contextually adequate and fluent translations. This discrepancy suggests LLMs’ potential in handling specialized legal terminology and context, highlighting the importance of human evaluation methods in assessing MT quality. The study underscores the evolving capabilities of LLMs in specialized domains and calls for reevaluation of traditional AEMs to better capture the nuances of LLM-generated translations.
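
As a rough illustration of the automatic-evaluation side of such a comparison, the sketch below scores two candidate sets of legal translations against professional reference translations using corpus-level BLEU and chrF from the sacrebleu library. The specific metrics, library, and example sentences are assumptions for illustration only and are not taken from the paper's experimental setup.

```python
# Minimal sketch: comparing two MT systems with automatic evaluation metrics (AEMs).
# Assumes the sacrebleu library (pip install sacrebleu); all data below is illustrative.
import sacrebleu

# Reference translations produced by professional legal translators (one per segment).
references = [
    "The contract shall be governed by the laws of Spain.",
    "Either party may terminate this agreement with thirty days' written notice.",
]

# Hypothesis translations from the two systems being compared (hypothetical outputs).
nmt_hypotheses = [
    "The contract will be ruled by the laws of Spain.",
    "Any party can terminate this agreement with a thirty days written notice.",
]
llm_hypotheses = [
    "The contract shall be governed by Spanish law.",
    "Either party may terminate this agreement upon thirty days' written notice.",
]

def score(name, hypotheses):
    # corpus_bleu/corpus_chrf take the hypotheses and a list of reference streams.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    chrf = sacrebleu.corpus_chrf(hypotheses, [references])
    print(f"{name}: BLEU = {bleu.score:.1f}, chrF = {chrf.score:.1f}")

score("NMT system", nmt_hypotheses)
score("LLM system", llm_hypotheses)
```

Corpus-level scores like these drive the AEM comparison, while the ranking, fluency, and adequacy judgments described above come from human evaluators and may diverge from the metric scores, which is the discrepancy the study highlights.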

Similar Work