
Non-autoregressive Streaming Transformer For Simultaneous Translation

Ma Zhengrui, Zhang Shaolei, Guo Shoutao, Shao Chenze, Zhang Min, Feng Yang. arXiv 2023

[Paper]    
Applications, GPT, Model Architecture, Pretraining Methods, Training Techniques, Transformer

Simultaneous machine translation (SiMT) models are trained to strike a balance between latency and translation quality. However, training these models to achieve high quality while maintaining low latency often leads to a tendency toward aggressive anticipation. We argue that this issue stems from the autoregressive architecture upon which most existing SiMT models are built. To address it, we propose the non-autoregressive streaming Transformer (NAST), which comprises a unidirectional encoder and a non-autoregressive decoder with intra-chunk parallelism. NAST can generate blank or repetitive tokens to adjust its READ/WRITE strategy flexibly, and is trained to maximize the non-monotonic latent alignment under an alignment-based latency loss. Experiments on various SiMT benchmarks demonstrate that NAST outperforms previous strong autoregressive SiMT baselines.
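To make the READ/WRITE mechanism concrete, below is a minimal, illustrative Python sketch (not the paper's implementation) of how a CTC-style blank token lets a chunk-wise non-autoregressive decoder postpone output: positions that decode to a blank or a repetition of the previous token emit nothing, which in effect turns a potential WRITE into a further READ. Names such as `decode_chunk`, `ctc_collapse`, and `<blank>` are hypothetical.

```python
BLANK = "<blank>"

def ctc_collapse(positions, prev=None):
    """Drop blanks and merge consecutive repetitions (standard CTC collapsing)."""
    out = []
    for tok in positions:
        if tok == BLANK:
            prev = BLANK          # blank separates repeats but emits nothing
            continue
        if tok != prev:
            out.append(tok)
        prev = tok
    return out, prev

def simultaneous_decode(source_stream, decode_chunk, chunk_size=4):
    """Toy READ/WRITE loop: READ `chunk_size` source tokens, decode the chunk
    in parallel, then WRITE whatever survives the collapse."""
    target, buffer, prev = [], [], None
    for src_tok in source_stream:                     # READ as source arrives
        buffer.append(src_tok)
        if len(buffer) == chunk_size:
            positions = decode_chunk(buffer, target)  # intra-chunk parallel decoding
            written, prev = ctc_collapse(positions, prev)
            target.extend(written)                    # WRITE the collapsed tokens
            buffer.clear()
    return target

if __name__ == "__main__":
    # Dummy decoder standing in for the actual model: it emits mostly blanks on the
    # first chunk (delaying output) and real tokens later, purely to show collapsing.
    def toy_decoder(chunk, prefix):
        return [BLANK, BLANK, chunk[0], chunk[0]] if not prefix else list(chunk)

    print(simultaneous_decode("abcdefgh", toy_decoder, chunk_size=4))
    # -> ['a', 'e', 'f', 'g', 'h']
```

In this toy setup, emitting blanks or repeated tokens within a chunk lowers how much is written after each READ, which is the lever an alignment-based latency loss would act on during training.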

Similar Work