
CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task

Ricardo Rei, Marcos Treviso, Nuno M. Guerreiro, Chrysoula Zerva, Ana C. Farinha, Christine Maroti, José G. C. de Souza, Taisiya Glushkova, Duarte M. Alves, Alon Lavie, Luisa Coheur, André F. T. Martins. arXiv 2022

[Paper]    
Attention Mechanism · Interpretability and Explainability · Model Architecture · Pretraining Methods · Tools · Training Techniques

We present the joint contribution of IST and Unbabel to the WMT 2022 Shared Task on Quality Estimation (QE). Our team participated in all three subtasks: (i) Sentence and Word-level Quality Prediction; (ii) Explainable QE; and (iii) Critical Error Detection. For all tasks we build on top of the COMET framework, connecting it with the predictor-estimator architecture of OpenKiwi, and equipping it with a word-level sequence tagger and an explanation extractor. Our results suggest that incorporating references during pretraining improves performance across several language pairs on downstream tasks, and that jointly training with sentence and word-level objectives yields a further boost. Furthermore, combining attention and gradient information proved to be the best strategy for extracting good explanations of sentence-level QE models. Overall, our submissions achieved the best results for all three tasks for almost all language pairs by a considerable margin.
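The abstract describes two reusable ideas: a shared encoder trained jointly on a sentence-level regression objective and a word-level OK/BAD tagging objective, and explanations extracted by combining attention weights with their gradients. The PyTorch sketch below illustrates both under stated assumptions; `JointQEHead`, `joint_loss`, `attention_times_gradient`, and the `alpha` weighting are illustrative names and simplifications, not the actual COMETKIWI code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointQEHead(nn.Module):
    """Hypothetical joint sentence- and word-level QE head.

    Assumes a shared pretrained encoder (e.g. an XLM-R-style model)
    that produces per-token states; a regression head scores the whole
    sentence while a tagging head labels each target word OK/BAD.
    """

    def __init__(self, hidden_size: int, num_word_labels: int = 2):
        super().__init__()
        # Sentence-level quality score from a pooled representation.
        self.sentence_head = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )
        # Word-level OK/BAD tagger over per-token states.
        self.word_head = nn.Linear(hidden_size, num_word_labels)

    def forward(self, token_states: torch.Tensor):
        # token_states: (batch, seq_len, hidden_size) from the encoder.
        pooled = token_states[:, 0]                    # CLS-style pooled state
        sent_score = self.sentence_head(pooled).squeeze(-1)
        word_logits = self.word_head(token_states)     # (batch, seq_len, labels)
        return sent_score, word_logits

def joint_loss(sent_score, sent_target, word_logits, word_labels, alpha=0.5):
    """Weighted sum of sentence regression and word tagging losses."""
    mse = F.mse_loss(sent_score, sent_target)
    ce = F.cross_entropy(word_logits.transpose(1, 2), word_labels,
                         ignore_index=-100)            # -100 masks padding
    return alpha * mse + (1.0 - alpha) * ce

def attention_times_gradient(attn: torch.Tensor, sent_score: torch.Tensor):
    """Token relevance as attention x gradient for one encoder layer.

    attn: (batch, heads, seq, seq) attention weights kept in the
    autograd graph (e.g. output_attentions=True without detaching).
    """
    (grad,) = torch.autograd.grad(sent_score.sum(), attn, retain_graph=True)
    relevance = (attn * grad).sum(dim=1)   # aggregate over heads
    return relevance.sum(dim=1)            # aggregate over queries -> (batch, seq)
```

Here `alpha` trades off the two objectives; in practice, per-layer relevance maps can be further aggregated across layers and subword scores mapped back to words before evaluating the explanations.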
