
DiffuSeq: Sequence To Sequence Text Generation With Diffusion Models

Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, Lingpeng Kong. arXiv 2022

[Paper] [Code]    
Applications, GPT, Has Code, Language Modeling, Merging, Pretraining Methods, Reinforcement Learning

Recently, diffusion models have emerged as a new paradigm for generative models. Despite their success in domains with continuous signals such as vision and audio, adapting diffusion models to natural language remains under-explored due to the discrete nature of text, especially for conditional generation. We tackle this challenge by proposing DiffuSeq: a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. Upon extensive evaluation over a wide range of Seq2Seq tasks, we find that DiffuSeq achieves comparable or even better performance than six established baselines, including a state-of-the-art model based on pre-trained language models. Beyond quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desirable in many Seq2Seq tasks. We further provide a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks. Code is available at https://github.com/Shark-NLP/DiffuSeq
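
As a rough illustration of the conditional setup the abstract describes, the sketch below shows a DiffuSeq-style forward ("partial noising") step in PyTorch: source and target token embeddings are concatenated into one sequence, and Gaussian noise is applied only to the target positions so the conditioning text stays clean. The function name, tensor shapes, and schedule handling here are assumptions for illustration, not the repository's actual interface.

```python
# Minimal sketch of partial noising (an assumption based on the paper's
# description, not the DiffuSeq repository's actual API).
import torch

def partial_noising(z0, target_mask, alpha_bar_t):
    """Forward diffusion step q(z_t | z_0) that noises only target positions.

    z0:          (batch, seq_len, dim) embeddings of [source ; target]
    target_mask: (batch, seq_len) 1.0 at target positions, 0.0 at source
    alpha_bar_t: scalar cumulative noise-schedule value for step t
    """
    noise = torch.randn_like(z0)
    # Standard Gaussian diffusion interpolation toward noise.
    zt = torch.sqrt(alpha_bar_t) * z0 + torch.sqrt(1.0 - alpha_bar_t) * noise
    mask = target_mask.unsqueeze(-1)      # broadcast over embedding dim
    # Keep source positions un-noised; noise only the target half.
    return mask * zt + (1.0 - mask) * z0

# Toy usage: 2 source tokens followed by 3 target tokens.
z0 = torch.randn(1, 5, 16)
mask = torch.tensor([[0., 0., 1., 1., 1.]])
zt = partial_noising(z0, mask, alpha_bar_t=torch.tensor(0.5))
```

Keeping the source embeddings fixed at every diffusion step is what lets a single sequence model handle conditional generation without a separate encoder, which is the property the abstract contrasts against the six Seq2Seq baselines.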

Similar Work