UT5: Pretraining Non Autoregressive T5 With Unrolled Denoising

Salem Mahmoud G., Ye Jiayu, Lin Chu-cheng, Liu Frederick. arXiv 2023

[Paper]    
GPT Language Modeling Model Architecture Pretraining Methods Training Techniques Transformer

Transformer-based large language models have recently made great strides in natural language generation. However, to decode K tokens, an autoregressive model needs K sequential forward passes, which can be a performance bottleneck for large language models. Much non-autoregressive (NAR) research aims to address this sequentiality bottleneck, although most of it has focused on dedicated architectures evaluated on supervised benchmarks. In this work, we study unsupervised pretraining for non-autoregressive T5 models via unrolled denoising and show state-of-the-art results on downstream generation tasks such as SQuAD question generation and XSum.
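
The sequentiality contrast can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not the authors' implementation: `ToyDecoder`, the vocabulary/hidden/length sizes, and the two-step refinement count are arbitrary stand-ins. It shows why an autoregressive decoder needs K forward passes to emit K tokens, while a NAR decoder in the unrolled-denoising style predicts all K positions in one pass and refines them over a small, fixed number of steps.

```python
# Hypothetical sketch (not the paper's code): autoregressive decoding needs K
# sequential forward passes, while non-autoregressive (NAR) decoding emits all
# K tokens in parallel and refines them over a few unrolled denoising steps.
import torch
import torch.nn as nn

VOCAB, HIDDEN, K = 100, 32, 8  # toy sizes, chosen arbitrarily for illustration

class ToyDecoder(nn.Module):
    """Stand-in for a Transformer decoder: maps token ids to vocabulary logits."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.proj = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tokens):               # tokens: (batch, length)
        return self.proj(self.embed(tokens))  # logits: (batch, length, VOCAB)

model = ToyDecoder()

def autoregressive_decode(bos_id=0):
    """K sequential passes: token t can only be produced after tokens < t."""
    tokens = torch.full((1, 1), bos_id, dtype=torch.long)
    for _ in range(K):                        # K forward passes -> latency bottleneck
        logits = model(tokens)
        next_tok = logits[:, -1:].argmax(dim=-1)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens[:, 1:]

def nar_decode(num_denoise_steps=2, mask_id=1):
    """Parallel prediction of all K positions, then a few refinement passes."""
    tokens = torch.full((1, K), mask_id, dtype=torch.long)  # corrupted/masked draft
    for _ in range(num_denoise_steps):        # constant #passes, independent of K
        tokens = model(tokens).argmax(dim=-1)
    return tokens

print(autoregressive_decode().shape, nar_decode().shape)
```

The point of the contrast is that the NAR refinement loop has a constant length chosen ahead of time, independent of the output length K, whereas the autoregressive loop grows with K.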

Similar Work