
The Unreasonable Volatility Of Neural Machine Translation Models

Fadaee Marzieh, Monz Christof. arXiv 2020

[Paper]    
Applications Model Architecture Pretraining Methods Reinforcement Learning Transformer

Recent work has shown that Neural Machine Translation (NMT) models achieve impressive performance; however, questions about understanding the behavior of these models remain unanswered. We investigate the unexpected volatility of NMT models on input that is semantically and syntactically correct. We discover that with trivial modifications of source sentences we can identify cases where unexpected changes happen in the translation, in the worst case leading to mistranslations. This volatile behavior of translating extremely similar sentences in surprisingly different ways highlights the underlying generalization problem of current NMT models. We find that RNN and Transformer models display volatile behavior in 26% and 19% of sentence variations, respectively.
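The abstract describes probing an NMT system by translating trivially modified source sentences and checking whether the outputs change far more than the inputs do. The sketch below is a minimal, hypothetical illustration of that setup, not the authors' implementation: the `translate` callable, the number-substitution variant generator, and the similarity threshold are all assumptions introduced here for illustration.

```python
# Minimal sketch (not the paper's code): probe an NMT model for volatile behavior
# by translating near-identical source sentences and comparing the outputs.
from difflib import SequenceMatcher
from typing import Callable, List


def make_variants(sentence: str, numbers: List[str]) -> List[str]:
    """Create near-identical variants by substituting a digit token,
    one kind of trivial source modification discussed in the paper."""
    variants = []
    for tok in sentence.split():
        if tok.isdigit():
            variants.extend(sentence.replace(tok, n, 1) for n in numbers if n != tok)
    return variants


def is_volatile(src: str, variant: str, translate: Callable[[str], str],
                threshold: float = 0.8) -> bool:
    """Flag a pair as volatile when two almost-identical sources yield
    translations that differ much more than the sources themselves."""
    src_sim = SequenceMatcher(None, src.split(), variant.split()).ratio()
    tgt_sim = SequenceMatcher(None,
                              translate(src).split(),
                              translate(variant).split()).ratio()
    return src_sim >= threshold and tgt_sim < threshold


if __name__ == "__main__":
    # Placeholder model; swap in any real NMT inference call.
    dummy_translate = lambda s: s.upper()
    source = "The meeting starts at 9 and ends at 11 ."
    for v in make_variants(source, ["8", "10", "12"]):
        print(v, "->", "volatile" if is_volatile(source, v, dummy_translate) else "stable")
```

With a real model plugged into `translate`, counting the flagged pairs over many sentence variations would give a rough volatility rate analogous to the 26% (RNN) and 19% (Transformer) figures reported above.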

Similar Work