Candidate Soups: Fusing Candidate Results Improves Translation Quality For Non-autoregressive Translation

Zheng Huanran, Zhu Wei, Wang Pengfei, Wang Xiaoling. arXiv 2023

GPT Pretraining Methods

Non-autoregressive translation (NAT) models achieve much faster inference than autoregressive translation (AT) models because they predict all tokens simultaneously. However, their translation quality is degraded compared to AT, and existing NAT methods focus only on improving the NAT model itself without fully utilizing the candidate translations it produces. In this paper, we propose a simple but effective method called "Candidate Soups," which obtains high-quality translations while maintaining the inference speed of NAT models. Unlike previous approaches, which pick a single candidate and discard the rest, Candidate Soups (CDS) fully exploits the valuable information in the different candidate translations through model uncertainty. Extensive experiments on two benchmarks (WMT'14 EN-DE and WMT'16 EN-RO) demonstrate the effectiveness and generality of the proposed method, which significantly improves the translation quality of various base models. Notably, our best variant outperforms the AT model on three translation tasks with a 7.6× speedup.
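The abstract describes the fusion idea only at a high level. As a rough illustration (not the paper's exact algorithm), the Python sketch below fuses several equal-length candidate translations position by position, using the model's token-level log-probabilities as the uncertainty signal. The function name `candidate_soup`, the equal-length assumption, and the position-wise selection rule are our assumptions; the paper's actual CDS procedure (e.g., how candidates are aligned and how uncertainty is estimated) may differ.

```python
from typing import List, Sequence


def candidate_soup(
    candidates: Sequence[Sequence[str]],
    token_logprobs: Sequence[Sequence[float]],
) -> List[str]:
    """Fuse candidate translations into a single output (hypothetical sketch).

    At each position, keep the token to which the model assigns the highest
    log-probability across all candidates. Assumes all candidates have the
    same length; the actual CDS method may handle alignment differently.
    """
    assert len({len(c) for c in candidates}) == 1, "equal-length candidates assumed"
    fused = []
    for pos in range(len(candidates[0])):
        # Choose the candidate whose token the model is most confident about here.
        best = max(range(len(candidates)), key=lambda i: token_logprobs[i][pos])
        fused.append(candidates[best][pos])
    return fused


# Toy usage: the fused output mixes tokens from both candidates.
cands = [["the", "cat", "sat"], ["a", "cat", "sits"]]
lps = [[-0.2, -0.1, -1.5], [-0.9, -0.3, -0.4]]
print(candidate_soup(cands, lps))  # ['the', 'cat', 'sits']
```

Because the fusion operates on candidates the NAT model has already decoded in parallel, a scheme like this adds only a lightweight post-processing pass, which is consistent with the paper's claim of preserving NAT inference speed.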

Similar Work