Training Deeper Neural Machine Translation Models With Transparent Attention

Bapna Ankur, Chen Mia Xu, Firat Orhan, Cao Yuan, Wu Yonghui. arXiv 2018

[Paper]    
Applications Attention Mechanism Efficiency And Optimization Model Architecture Pretraining Methods Training Techniques Transformer

While current state-of-the-art NMT models, such as RNN seq2seq and Transformer models, possess a large number of parameters, they are still shallow in comparison to the convolutional models used in both text and vision applications. In this work we attempt to train significantly (2-3x) deeper Transformer and Bi-RNN encoders for machine translation. We propose a simple modification to the attention mechanism that eases the optimization of deeper models and yields consistent gains of 0.7-1.1 BLEU on the benchmark WMT'14 English-German and WMT'15 Czech-English tasks for both architectures.
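The abstract does not spell out the mechanism, but the "transparent attention" of the title refers to letting the decoder attend to a learned, softmax-weighted combination of all encoder layer outputs (including the embeddings) rather than only the top layer, which shortens the gradient paths into the lower encoder layers. The sketch below shows that weighted combination in NumPy; the function and variable names (`transparent_encoder_states`, `encoder_layer_outputs`, `layer_logits`) are illustrative, not taken from the paper's code.

```python
import numpy as np

def transparent_encoder_states(encoder_layer_outputs, layer_logits):
    """Combine all encoder layer outputs into one attention memory.

    encoder_layer_outputs: list of (num_layers + 1) arrays, each of shape
        [src_len, d_model] (the embeddings plus every encoder layer's output).
    layer_logits: learned scalars for a single decoder layer, one per
        encoder layer; shape [num_layers + 1].

    Instead of attending only to the top encoder layer, the decoder attends
    to a softmax-weighted sum of every layer's output, so gradients reach
    the lower encoder layers directly.
    """
    weights = np.exp(layer_logits - layer_logits.max())
    weights = weights / weights.sum()                   # softmax over layers
    stacked = np.stack(encoder_layer_outputs, axis=0)   # [L+1, src_len, d_model]
    return np.tensordot(weights, stacked, axes=1)       # [src_len, d_model]

# Toy usage: a 6-layer encoder plus embeddings, 10 source tokens, d_model = 8.
rng = np.random.default_rng(0)
outputs = [rng.normal(size=(10, 8)) for _ in range(7)]
logits = rng.normal(size=7)
memory = transparent_encoder_states(outputs, logits)
print(memory.shape)  # (10, 8)
```

In the full model the layer weights would be trainable parameters (one set per decoder layer) updated jointly with the rest of the network; the sketch only shows the forward combination that replaces the usual "attend to the top encoder layer" step.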

Similar Work