Hard-coded Gaussian Attention For Neural Machine Translation

You Weiqiu, Sun Simeng, Iyyer Mohit. arXiv 2020

[Paper]    
Applications · Attention Mechanism · Model Architecture · Pretraining Methods · Transformer

Recent work has questioned the importance of the Transformer’s multi-headed attention for achieving high translation quality. We push further in this direction by developing a “hard-coded” attention variant without any learned parameters. Surprisingly, replacing all learned self-attention heads in the encoder and decoder with fixed, input-agnostic Gaussian distributions minimally impacts BLEU scores across four different language pairs. However, additionally hard-coding cross attention (which connects the decoder to the encoder) significantly lowers BLEU, suggesting that it is more important than self-attention. Much of this BLEU drop can be recovered by adding just a single learned cross attention head to an otherwise hard-coded Transformer. Taken as a whole, our results offer insight into which components of the Transformer are actually important, which we hope will guide future work into the development of simpler and more efficient attention-based models.
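The abstract describes replacing learned attention heads with fixed, input-agnostic Gaussian distributions over positions. The sketch below illustrates what such a hard-coded head could look like: each query position attends to nearby key positions with Gaussian weights that depend only on position, not on the input. The `offset` and `sigma` values here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def hard_coded_gaussian_attention(values, offset=0, sigma=1.0):
    """Input-agnostic attention head: query position i attends to key
    position j with a weight proportional to a Gaussian centered at
    i + offset. No parameters are learned; `offset` and `sigma` are
    illustrative choices only."""
    seq_len, d_model = values.shape
    positions = np.arange(seq_len)
    # Unnormalized Gaussian scores depend only on positions, never on the input.
    scores = np.exp(
        -((positions[None, :] - (positions[:, None] + offset)) ** 2)
        / (2.0 * sigma ** 2)
    )
    # Normalize each row so the weights over key positions sum to 1.
    weights = scores / scores.sum(axis=-1, keepdims=True)
    return weights @ values  # shape: (seq_len, d_model)

# Usage: stand in for one learned self-attention head with a fixed distribution.
x = np.random.randn(6, 8)                          # toy token representations
out = hard_coded_gaussian_attention(x, offset=1)   # head biased one token ahead
```

In this formulation, different heads can simply use different offsets (e.g. attending to the previous or next token), which is what makes the attention pattern entirely input-agnostic.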

Similar Work