
Multimodal Attention For Neural Machine Translation

Ozan Caglayan, Loïc Barrault, Fethi Bougares. arXiv 2016

[Paper]    
Applications Attention Mechanism Model Architecture Multimodal Models Transformer

The attention mechanism is an important part of neural machine translation (NMT), where it has been reported to produce richer source representations than the fixed-length encodings of earlier sequence-to-sequence models. Recently, the effectiveness of attention has also been explored in the context of image captioning. In this work, we assess the feasibility of a multimodal attention mechanism that simultaneously focuses on an image and its natural language description in order to generate a description in another language. We train several variants of our proposed attention mechanism on the Multi30k multilingual image captioning dataset. We show that a dedicated attention mechanism for each modality achieves up to 1.6 points in BLEU and METEOR over a textual NMT baseline.
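To make the "dedicated attention for each modality" idea concrete, below is a minimal PyTorch sketch of a decoder-side layer that runs one additive attention over source-sentence annotations and a separate one over spatial image features, then fuses the two context vectors. The class name `MultimodalAttention`, the dimensions, and the tanh fusion step are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class MultimodalAttention(nn.Module):
    """Sketch of a per-modality (dedicated) attention layer.

    One additive attention attends over source-text annotations, another
    over spatial image features; the two contexts are fused before being
    fed to the decoder. Shapes and fusion are assumptions for illustration.
    """

    def __init__(self, txt_dim, img_dim, dec_dim, attn_dim):
        super().__init__()
        # Dedicated attention parameters for the textual modality.
        self.txt_query = nn.Linear(dec_dim, attn_dim)
        self.txt_key = nn.Linear(txt_dim, attn_dim)
        self.txt_score = nn.Linear(attn_dim, 1)
        # Dedicated attention parameters for the visual modality.
        self.img_query = nn.Linear(dec_dim, attn_dim)
        self.img_key = nn.Linear(img_dim, attn_dim)
        self.img_score = nn.Linear(attn_dim, 1)
        # Fuse the two modality contexts into one vector for the decoder.
        self.fuse = nn.Linear(txt_dim + img_dim, dec_dim)

    @staticmethod
    def _attend(query, keys, values, score_layer):
        # Additive (Bahdanau-style) attention over a set of annotation vectors.
        energies = score_layer(torch.tanh(query.unsqueeze(1) + keys)).squeeze(-1)
        weights = torch.softmax(energies, dim=1)            # (batch, n_items)
        context = (weights.unsqueeze(-1) * values).sum(1)   # (batch, value_dim)
        return context, weights

    def forward(self, dec_state, txt_annotations, img_features):
        # dec_state:       (batch, dec_dim)              current decoder state
        # txt_annotations: (batch, src_len, txt_dim)     text encoder outputs
        # img_features:    (batch, n_regions, img_dim)   conv-map regions
        txt_ctx, txt_alpha = self._attend(
            self.txt_query(dec_state), self.txt_key(txt_annotations),
            txt_annotations, self.txt_score)
        img_ctx, img_alpha = self._attend(
            self.img_query(dec_state), self.img_key(img_features),
            img_features, self.img_score)
        fused = torch.tanh(self.fuse(torch.cat([txt_ctx, img_ctx], dim=-1)))
        return fused, txt_alpha, img_alpha


if __name__ == "__main__":
    attn = MultimodalAttention(txt_dim=512, img_dim=1024, dec_dim=256, attn_dim=128)
    ctx, a_txt, a_img = attn(
        torch.randn(4, 256),          # decoder state
        torch.randn(4, 20, 512),      # 20 source-token annotations
        torch.randn(4, 196, 1024))    # 14x14 convolutional feature map
    print(ctx.shape, a_txt.shape, a_img.shape)
```

Keeping separate projection weights per modality is what distinguishes this variant from sharing a single attention over the concatenated text and image annotations; the returned attention weights also make it possible to inspect which source words and image regions the decoder looks at for each generated word.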

Similar Work