
Towards Reinforcement Learning For Pivot-based Neural Machine Translation With Non-autoregressive Transformer

Evgeniia Tokarchuk, Jan Rosendahl, Weiyue Wang, Pavel Petrushkov, Tomer Lancewicki, Shahram Khadivi, Hermann Ney. arXiv 2021

[Paper]    
Agentic Applications, GPT, Language Modeling, Model Architecture, Pretraining Methods, Reinforcement Learning, Training Techniques, Transformer

Pivot-based neural machine translation (NMT) is commonly used in low-resource setups, especially for translation between non-English language pairs. It benefits from the availability of high-resource source-pivot and pivot-target parallel data, with a separate system trained for each sub-task. However, the two models are not connected during training, so the source-pivot model is not optimized to produce the best translation for the downstream source-target task. In this work, we propose to train a pivot-based NMT system with reinforcement learning (RL), an approach that has been investigated for various text generation tasks, including machine translation (MT). We utilize a non-autoregressive transformer and present an end-to-end pivot-based integrated model, enabling training on source-target data.
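
As a rough illustration of the RL idea described in the abstract (not the paper's exact method), the sketch below shows a generic REINFORCE-style update for a cascaded pivot system in PyTorch: sample a pivot-language hypothesis from the source-pivot model, translate it with the pivot-target model, and use the quality of the final target output as the reward. The wrappers `src_pivot_model`, `pivot_tgt_model`, and the `sentence_bleu` reward function are hypothetical names introduced here for illustration.

```python
import torch

def rl_pivot_step(src_pivot_model, pivot_tgt_model, src, tgt_ref,
                  sentence_bleu, optimizer):
    """One REINFORCE-style update of the source-pivot model.

    Hypothetical interfaces (assumptions, not from the paper):
      src_pivot_model.sample(src)      -> (pivot_tokens, log_probs tensor)
      pivot_tgt_model.translate(piv)   -> target hypothesis
      sentence_bleu(hyp, ref)          -> float reward
    """
    # Sample a pivot-language hypothesis and keep its per-token
    # log-probabilities for the policy-gradient update.
    pivot_tokens, log_probs = src_pivot_model.sample(src)

    # The pivot-target model acts as part of the environment here,
    # so no gradients flow through it.
    with torch.no_grad():
        tgt_hyp = pivot_tgt_model.translate(pivot_tokens)

    # Sentence-level reward computed on the final source-target output,
    # e.g. sentence BLEU against the reference.
    reward = sentence_bleu(tgt_hyp, tgt_ref)

    # REINFORCE: scale the negative log-likelihood of the sampled pivot
    # by the reward, pushing the source-pivot model toward pivots that
    # yield good target translations.
    loss = -reward * log_probs.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

This directly addresses the mismatch noted above: the source-pivot model receives a training signal derived from source-target translation quality, rather than being optimized for the pivot sub-task in isolation. A non-autoregressive pivot model, as used in the paper, additionally allows the two stages to be integrated end-to-end.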
