
TransQuest at WMT2020: Sentence-Level Direct Assessment

Ranasinghe Tharindu, Orasan Constantin, Mitkov Ruslan. arXiv 2020

[Paper]    
Model Architecture · Pretraining Methods · Tools · Transformer

This paper presents the team TransQuest's participation in the Sentence-Level Direct Assessment shared task at WMT 2020. We introduce a simple QE framework based on cross-lingual transformers, and we use it to implement and evaluate two different neural architectures. The proposed methods achieve state-of-the-art results, surpassing those obtained by OpenKiwi, the baseline used in the shared task. We further improve the QE framework with ensembling and data augmentation. Our approach is the winning solution in all of the language pairs according to the WMT 2020 official results.
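The sketch below illustrates the general idea of sentence-level QE with a cross-lingual transformer: the source sentence and its machine translation are encoded jointly, and a single regression head predicts a quality score. This is a minimal illustration, not the authors' code; the `xlm-roberta-base` backbone and the helper `predict_quality` are assumptions for the example.

```python
# Minimal sketch of a sentence-level QE regressor on a cross-lingual
# transformer backbone (assumed: xlm-roberta-base via Hugging Face
# transformers; not the authors' exact implementation).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "xlm-roberta-base"  # assumed cross-lingual backbone

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=1 gives a single-output regression head for the quality score.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def predict_quality(source: str, translation: str) -> float:
    """Score a (source, MT output) pair; higher means better predicted quality."""
    # Source and translation are packed into one sequence pair for joint encoding.
    inputs = tokenizer(source, translation, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()

print(predict_quality("Das ist ein Test.", "This is a test."))
```

In practice such a regressor would be fine-tuned on direct-assessment annotations before its scores are meaningful; the untuned head above only shows the input/output shape of the task.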

Similar Work