
David Helps Goliath: Inference-Time Collaboration Between Small Specialized and Large General Diffusion LMs

Han Xiaochuang, Kumar Sachin, Tsvetkov Yulia, Ghazvininejad Marjan. arXiv 2023

[Paper]    
Efficiency and Optimization, GPT, Language Modeling, Merging, Pretraining Methods, Training Techniques

Diffusion-based language models are emerging as a promising alternative to autoregressive LMs: they approach the competence of autoregressive LMs while offering nuanced controllability at inference time. While autoregressive LMs have benefited immensely from scaling and instruction-based learning, existing studies of diffusion LMs have been conducted at a smaller scale. Starting with the recently proposed diffusion model SSD-LM, this work first explores methods to scale it from 0.4B to 13B parameters, proposing techniques to improve its training and inference efficiency and to finetune the model to follow instructions. Armed with a more powerful, general-purpose diffusion LM, we introduce the primary contribution of this work, SSD-2: an approach for easily ensembling, at inference time, a large general-purpose diffusion LM with smaller but specialized and contextualized diffusion LMs. We show that SSD-2 facilitates novel ensembles with 100x smaller models that can be customized and deployed by individual users. We find that, compared to autoregressive models, the collaboration between diffusion LMs is more effective, leading to higher-quality model responses due to their ability to dynamically incorporate bidirectional context.
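The core idea of the inference-time collaboration is that, at each denoising step, both the large general-purpose diffusion LM and the small specialized diffusion LM produce per-token predictions that are combined before diffusion continues. The sketch below illustrates one plausible realization of this idea as a log-space weighted mixture of the two models' logits; the function names, model interfaces, and the specific weighting scheme are illustrative assumptions, not the paper's actual API or exact ensembling rule.

```python
# Hypothetical sketch: mixing a large and a small diffusion LM's per-token
# logits at a single denoising step. Names and signatures are placeholders.
import torch

def collaborative_denoise_step(large_lm, small_lm, x_t, t, alpha=0.5):
    """One ensembled denoising step.

    large_lm, small_lm: callables mapping (x_t, t) -> logits of shape
        (batch, seq_len, vocab_size); x_t is the current noisy representation.
    alpha: assumed weight on the large model's prediction (tunable).
    """
    with torch.no_grad():
        logits_large = large_lm(x_t, t)
        logits_small = small_lm(x_t, t)
    # Log-space weighted mixture (a simple product-of-experts-style combination).
    return alpha * torch.log_softmax(logits_large, dim=-1) + \
           (1.0 - alpha) * torch.log_softmax(logits_small, dim=-1)

if __name__ == "__main__":
    # Stand-in "models" returning random logits, just to show the call pattern.
    batch, seq_len, vocab_size = 2, 8, 100
    dummy_large = lambda x_t, t: torch.randn(batch, seq_len, vocab_size)
    dummy_small = lambda x_t, t: torch.randn(batch, seq_len, vocab_size)
    x_t = torch.randn(batch, seq_len, vocab_size)
    mixed = collaborative_denoise_step(dummy_large, dummy_small, x_t, t=10)
    print(mixed.shape)  # torch.Size([2, 8, 100])
```

Because diffusion LMs predict all positions at each step, such a mixture lets the small specialized model steer the large model using bidirectional context, rather than only influencing the next token as in an autoregressive ensemble.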

Similar Work