Hello, It's GPT-2 -- How Can I Help You? Towards The Use Of Pretrained Language Models For Task-oriented Dialogue Systems

Paweł Budzianowski, Ivan Vulić. arXiv 2019 – 117 citations

Tags: Training Techniques, GPT, Pre-Training, Fine-Tuning, Tools, Reinforcement Learning, Agentic, Language Modeling, Model Architecture

Data scarcity is a long-standing and crucial challenge that hinders quick development of task-oriented dialogue systems across multiple domains: task-oriented dialogue models are expected to learn grammar, syntax, dialogue reasoning, decision making, and language generation from absurdly small amounts of task-specific data. In this paper, we demonstrate that recent progress in language modeling pre-training and transfer learning shows promise to overcome this problem. We propose a task-oriented dialogue model that operates solely on text input: it effectively bypasses explicit policy and language generation modules. Building on top of the TransferTransfo framework (Wolf et al., 2019) and generative model pre-training (Radford et al., 2019), we validate the approach on complex multi-domain task-oriented dialogues from the MultiWOZ dataset. Our automatic and human evaluations show that the proposed model is on par with a strong task-specific neural baseline. In the long run, our approach holds promise to mitigate the data scarcity problem, and to support the construction of more engaging and more eloquent task-oriented conversational agents.
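The core of the approach is a single text-in/text-out generator: the dialogue history, together with textual belief-state and database annotations, is flattened into one string, and a GPT-2 model generates the next system turn. The sketch below illustrates that interface using the HuggingFace transformers library; the special markers (`<user>`, `<belief>`, `<db>`, `<system>`), the decoding settings, and the use of the off-the-shelf `gpt2` checkpoint (rather than a model fine-tuned on MultiWOZ with TransferTransfo-style objectives, as in the paper) are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the text-in/text-out idea: the whole dialogue context
# is flattened into one string and GPT-2 generates the next system turn.
# Markers and decoding settings are hypothetical, for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# One MultiWOZ-style context flattened into plain text (hypothetical markers).
context = (
    "<user> i need a cheap restaurant in the centre "
    "<belief> restaurant pricerange=cheap area=centre "
    "<db> 3 matches "
    "<system>"
)

input_ids = tokenizer.encode(context, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=input_ids.shape[1] + 40,  # up to 40 newly generated tokens
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

# Keep only the newly generated system turn.
response = tokenizer.decode(
    output_ids[0, input_ids.shape[1]:], skip_special_tokens=True
)
print(response)
```

In the paper, the same interface would be driven by a GPT-2 model first fine-tuned on MultiWOZ dialogues, so that the generated continuation serves as the system response without separate policy or NLG modules.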

Similar Work