A Three-stage Learning Framework For Low-resource Knowledge-grounded Dialogue Generation

Liu Shilei, Zhao Xiaofeng, Li Bochao, Ren Feiliang, Zhang Longhui, Yin Shujuan. arXiv 2021

[Paper]    
Model Architecture Pretraining Methods Reinforcement Learning Tools Training Techniques Transformer

Neural conversation models have shown great potential for generating fluent and informative responses by introducing external background knowledge. Nevertheless, it is laborious to construct such knowledge-grounded dialogues, and existing models usually perform poorly when transferred to new domains with limited training samples. Building a knowledge-grounded dialogue system under the low-resource setting therefore remains a crucial issue. In this paper, we propose a novel three-stage learning framework based on weakly supervised learning, which benefits from large-scale ungrounded dialogues and an unstructured knowledge base. To better cooperate with this framework, we devise a variant of the Transformer with a decoupled decoder, which facilitates the disentangled learning of response generation and knowledge incorporation. Evaluation results on two benchmarks indicate that our approach outperforms other state-of-the-art methods with less training data, and even in the zero-resource scenario it still performs well.
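The abstract implies a staged schedule in which different parts of the model are trained on different data sources: the response-generation pathway on ungrounded dialogues, the knowledge-incorporation pathway on the unstructured knowledge base, and the full model on the few grounded dialogues available. The sketch below illustrates that idea as a parameter-freezing schedule; the stage names, parameter groupings, and module names are illustrative assumptions, not the paper's exact recipe.

```python
# Hypothetical sketch of a three-stage freeze/unfreeze schedule.
# The decoder is assumed to be decoupled into a response-generation
# part and a knowledge-incorporation part, as the abstract describes;
# all names below are made up for illustration.

PARAM_GROUPS = {
    "encoder": ["enc.self_attn", "enc.ffn"],
    "response_decoder": ["dec.self_attn", "dec.ffn"],
    "knowledge_decoder": ["dec.knowledge_cross_attn"],
}

STAGES = {
    # Stage 1: pre-train response generation on large-scale ungrounded dialogues.
    "ungrounded_dialogue_pretraining": {"encoder", "response_decoder"},
    # Stage 2: learn knowledge incorporation from the unstructured knowledge base.
    "knowledge_pretraining": {"encoder", "knowledge_decoder"},
    # Stage 3: fine-tune the whole model on the limited grounded dialogues.
    "grounded_finetuning": {"encoder", "response_decoder", "knowledge_decoder"},
}

def trainable_params(stage: str) -> list:
    """Return the sorted parameter names left unfrozen in a given stage."""
    return sorted(
        name
        for group in STAGES[stage]
        for name in PARAM_GROUPS[group]
    )
```

For example, `trainable_params("ungrounded_dialogue_pretraining")` excludes the knowledge cross-attention parameters, so stage 1 never touches the knowledge pathway; only stage 3 updates every group at once.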

Similar Work