Towards Opening The Black Box Of Neural Machine Translation: Source And Target Interpretations Of The Transformer

Ferrando Javier, Gállego Gerard I., Alastruey Belen, Escolano Carlos, Costa-jussà Marta R. arXiv 2022

[Paper]
Applications Interpretability And Explainability Model Architecture Pretraining Methods Transformer

In Neural Machine Translation (NMT), each token prediction is conditioned on the source sentence and the target prefix (what has been translated so far at a given decoding step). However, previous work on interpretability in NMT has focused mainly on attributions of source sentence tokens. We therefore lack a full understanding of how every input token (source sentence and target prefix) influences the model's predictions. In this work, we propose an interpretability method that tracks input token attributions for both contexts. Our method, which can be extended to any encoder-decoder Transformer-based model, allows us to better comprehend the inner workings of current NMT models. We apply the proposed method to both bilingual and multilingual Transformers and present insights into their behaviour.
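The abstract does not detail the method itself, but the core idea, scoring each source token and each target-prefix token for a single prediction step, can be sketched in a toy form. The snippet below is an illustrative assumption, not the paper's actual algorithm: it simply treats a decoder's cross-attention row (over source tokens) and causal self-attention row (over the target prefix) as proxy scores and merges them into one normalized attribution distribution. The function name and the use of raw attention weights are choices made here for illustration only.

```python
import numpy as np

def combined_attributions(cross_attn, self_attn, step):
    """Toy per-step attribution over source tokens and the target prefix.

    cross_attn: (tgt_len, src_len) decoder-to-encoder attention weights
    self_attn:  (tgt_len, tgt_len) causal decoder self-attention weights
    step:       0-indexed decoding step whose prediction we inspect
    """
    src_part = cross_attn[step]               # weight on each source token
    tgt_part = self_attn[step, : step + 1]    # weight on tokens in the prefix
    scores = np.concatenate([src_part, tgt_part])
    return scores / scores.sum()              # normalize into a distribution

# Example with random attention weights for a 4-token source,
# inspecting the prediction at decoding step 2 (prefix of 3 tokens).
rng = np.random.default_rng(0)
cross = rng.random((5, 4))
self_a = np.tril(rng.random((5, 5)))          # causal mask: lower-triangular
attr = combined_attributions(cross, self_a, step=2)
# attr has 4 source entries + 3 prefix entries and sums to 1
```

A real implementation would aggregate contributions across layers and heads rather than read a single attention matrix, but this shows the shape of the output the abstract describes: one attribution per input token, covering both contexts.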

Similar Work