
Improving Neural Machine Translation With Pre-trained Representation

Weng Rongxiang, Yu Heng, Huang Shujian, Luo Weihua, Chen Jiajun. arXiv 2019

[Paper]    
Applications Model Architecture Pretraining Methods RAG Tools Transformer

Monolingual data has been shown to help improve the translation quality of neural machine translation (NMT). Current methods, however, remain at the level of word-level knowledge, such as generating synthetic parallel data or extracting information from word embeddings. In contrast, sentence-level contextual knowledge, which is richer and more diverse and plays an important role in natural language generation, has not been fully exploited. In this paper, we propose a novel structure that leverages monolingual data to acquire sentence-level contextual representations. We then design a framework for integrating both source and target sentence-level representations into the NMT model to improve translation quality. Experimental results on Chinese-English and German-English machine translation tasks show that the proposed model improves over strong Transformer baselines, while experiments on English-Turkish further demonstrate the effectiveness of the approach in the low-resource scenario.
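The abstract describes integrating a pre-trained sentence-level representation into the NMT model but does not spell out the fusion mechanism here. The sketch below is only an illustrative assumption of one common way to do this: a learned gate that blends per-token Transformer encoder states with a sentence-level context vector. The class name, dimensions, and gating design are hypothetical and are not taken from the paper.

```python
# A minimal sketch (not the paper's exact architecture) of fusing a
# pre-trained sentence-level representation into Transformer encoder
# states via a learned per-token gate. All names and sizes are assumptions.
import torch
import torch.nn as nn


class GatedSentenceFusion(nn.Module):
    """Blend per-token encoder states with a sentence-level context vector."""

    def __init__(self, d_model: int, d_context: int):
        super().__init__()
        self.project = nn.Linear(d_context, d_model)  # map context to model size
        self.gate = nn.Linear(2 * d_model, d_model)   # per-token fusion gate

    def forward(self, enc_states: torch.Tensor, sent_repr: torch.Tensor) -> torch.Tensor:
        # enc_states: (batch, src_len, d_model) from the NMT encoder
        # sent_repr:  (batch, d_context) from a pre-trained sentence encoder
        ctx = self.project(sent_repr).unsqueeze(1)        # (batch, 1, d_model)
        ctx = ctx.expand(-1, enc_states.size(1), -1)      # broadcast over tokens
        g = torch.sigmoid(self.gate(torch.cat([enc_states, ctx], dim=-1)))
        return g * enc_states + (1.0 - g) * ctx           # gated combination


if __name__ == "__main__":
    fusion = GatedSentenceFusion(d_model=512, d_context=768)
    enc = torch.randn(2, 10, 512)   # dummy encoder states
    sent = torch.randn(2, 768)      # dummy pre-trained sentence representation
    print(fusion(enc, sent).shape)  # torch.Size([2, 10, 512])
```

In such a design, the gate lets the model decide, token by token, how much sentence-level context to mix in; an analogous module could in principle be applied on the target side as well.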

Similar Work