Generating Diverse Translation By Manipulating Multi-head Attention

Sun Zewei, Huang Shujian, Wei Hao-ran, Dai Xin-yu, Chen Jiajun. arXiv 2019

[Paper]    
Applications Attention Mechanism Model Architecture Pretraining Methods Transformer

The Transformer model has been widely used for machine translation and has obtained state-of-the-art results. In this paper, we report an interesting phenomenon in its encoder-decoder multi-head attention: different attention heads of the final decoder layer align to different word translation candidates. We empirically verify this observation and propose a method that generates diverse translations by manipulating these heads. Furthermore, we use the diverse translations with the back-translation technique for better data augmentation. Experimental results show that our method generates diverse translations without a severe drop in translation quality. They also show that back-translation with these diverse translations brings significant performance improvements on translation tasks. An auxiliary experiment on a conversational response generation task confirms the benefit of the added diversity as well.
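The core manipulation can be illustrated with a minimal sketch. This is not the authors' code: it assumes a standard multi-head encoder-decoder attention layer and shows how keeping only one head's output before the output projection (all function and variable names here are illustrative) can steer the decoder toward different word candidates, which is the source of the diversity described above.

```python
# Minimal sketch (assumption-based, not the paper's implementation): keep a
# single head in the final decoder layer's encoder-decoder attention and zero
# out the rest before the output projection, so each head choice can yield a
# different next-word distribution and hence a different translation.
import torch
import torch.nn.functional as F


def cross_attention_keep_head(query, enc_keys, enc_values, w_q, w_k, w_v, w_o,
                              num_heads, keep_head=None):
    """Multi-head encoder-decoder attention; if keep_head is set, only that
    head's output is passed to the final output projection."""
    d_model = query.size(-1)
    d_head = d_model // num_heads

    def split_heads(x, w):
        # (batch, len, d_model) -> (batch, heads, len, d_head)
        b, n, _ = x.shape
        return (x @ w).view(b, n, num_heads, d_head).transpose(1, 2)

    q = split_heads(query, w_q)
    k = split_heads(enc_keys, w_k)
    v = split_heads(enc_values, w_v)

    scores = q @ k.transpose(-2, -1) / d_head ** 0.5
    attn = F.softmax(scores, dim=-1)       # per-head alignment to source words
    head_out = attn @ v                    # (batch, heads, tgt_len, d_head)

    if keep_head is not None:
        mask = torch.zeros(num_heads)
        mask[keep_head] = 1.0              # manipulate: keep a single head
        head_out = head_out * mask.view(1, num_heads, 1, 1)

    b, _, n, _ = head_out.shape
    concat = head_out.transpose(1, 2).reshape(b, n, d_model)
    return concat @ w_o


if __name__ == "__main__":
    torch.manual_seed(0)
    batch, src_len, tgt_len, d_model, heads = 1, 5, 1, 16, 4
    ws = [torch.randn(d_model, d_model) * 0.1 for _ in range(4)]
    dec_state = torch.randn(batch, tgt_len, d_model)   # final-layer decoder query
    enc_out = torch.randn(batch, src_len, d_model)     # encoder outputs

    # Keeping different heads produces different attention outputs, which can
    # push the decoder toward different translation candidates.
    for h in range(heads):
        out = cross_attention_keep_head(dec_state, enc_out, enc_out,
                                        *ws, num_heads=heads, keep_head=h)
        print(f"head {h}: output norm {out.norm():.3f}")
```

In a real decoder, this head selection would be applied at each decoding step before beam search or sampling; the back-translation step then simply feeds the resulting diverse translations back as synthetic training data.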

Similar Work