Diverse Pretrained Context Encodings Improve Document Translation

Domenic Donato, Lei Yu, Chris Dyer. ACL 2021

[Paper]    
Tags: Efficiency And Optimization, Model Architecture, Pretraining Methods, Reinforcement Learning, Training Techniques, Transformer

We propose a new architecture for adapting a sentence-level sequence-to-sequence transformer by incorporating multiple pretrained document context signals, and we assess the impact on translation performance of (1) different pretraining approaches for generating these signals, (2) the quantity of parallel data for which document context is available, and (3) conditioning on source, target, or both source and target contexts. Experiments on the NIST Chinese-English task and the IWSLT and WMT English-German tasks support four general conclusions: using pretrained context representations markedly improves sample efficiency; adequate parallel data resources are crucial for learning to use document context; jointly conditioning on multiple context representations outperforms any single representation; and source context is more valuable for translation performance than target-side context. Our best multi-context model consistently outperforms the best existing context-aware transformers.
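To make the idea of jointly conditioning on multiple pretrained context signals concrete, below is a minimal sketch (not the authors' released code) of a decoder-side block that cross-attends over the current-sentence encoder states plus several document-context encodings produced by pretrained encoders. The gating scheme and module names here are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: joint conditioning on multiple pretrained document-context encodings.
# The combination mechanism (per-context gated cross-attention) is an assumption.
import torch
import torch.nn as nn


class MultiContextAttention(nn.Module):
    """Cross-attends over sentence states plus N pretrained context encodings."""

    def __init__(self, d_model: int, n_heads: int, n_contexts: int):
        super().__init__()
        self.sent_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ctx_attns = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_contexts)
        )
        # Learned scalar gates decide how much each context stream contributes.
        self.gates = nn.Parameter(torch.zeros(n_contexts))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, dec_states, sent_enc, ctx_encs):
        # dec_states: (B, T_dec, d); sent_enc: (B, T_src, d)
        # ctx_encs: list of (B, T_ctx_i, d) pretrained document-context encodings
        out, _ = self.sent_attn(dec_states, sent_enc, sent_enc)
        for gate, attn, ctx in zip(self.gates, self.ctx_attns, ctx_encs):
            ctx_out, _ = attn(dec_states, ctx, ctx)
            out = out + torch.sigmoid(gate) * ctx_out
        return self.norm(dec_states + out)


if __name__ == "__main__":
    block = MultiContextAttention(d_model=512, n_heads=8, n_contexts=2)
    dec = torch.randn(2, 10, 512)      # decoder states for the current sentence
    src = torch.randn(2, 12, 512)      # sentence-level encoder output
    ctxs = [torch.randn(2, 64, 512),   # e.g. source-side document context
            torch.randn(2, 64, 512)]   # e.g. target-side document context
    print(block(dec, src, ctxs).shape)  # torch.Size([2, 10, 512])
```

Keeping one attention stream per context encoding, with a gate initialized near zero, lets the model start from sentence-level behavior and learn how much weight to give each pretrained signal; this mirrors the paper's finding that combining multiple context representations beats relying on any single one.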

Similar Work