
Low-resource Knowledge-grounded Dialogue Generation

Zhao Xueliang, Wu Wei, Tao Chongyang, Xu Can, Zhao Dongyan, Yan Rui. arXiv 2020

[Paper]    

Responding with knowledge has been recognized as an important capability for an intelligent conversational agent. Yet knowledge-grounded dialogues, which serve as training data for such a response generation model, are difficult to obtain. Motivated by this practical challenge, we consider knowledge-grounded dialogue generation under the natural assumption that only a limited number of training examples is available. In this low-resource setting, we devise a disentangled response decoder that isolates the parameters depending on knowledge-grounded dialogues from the rest of the generation model. The major part of the model can thus be learned from a large number of ungrounded dialogues and unstructured documents, while the remaining small set of parameters can be well fitted with the limited training examples. Evaluation results on two benchmarks indicate that with only 1/8 of the training data, our model achieves state-of-the-art performance and generalizes well to out-of-domain knowledge.
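To make the decomposition concrete, below is a minimal PyTorch sketch of one way such a disentangled decoder could be organized. The module names, the residual cross-attention fusion, and the two-stage parameter split are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
# Illustrative sketch of a disentangled response decoder (not the authors' code).
# Assumption: the decoder splits into a large context-only language component,
# pretrainable on ungrounded dialogues and documents, and a small knowledge
# component whose parameters are the only ones fitted on grounded examples.
import torch
import torch.nn as nn

class DisentangledDecoder(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, n_heads=8):
        super().__init__()
        # Large component: trainable on plentiful ungrounded dialogues.
        self.embed = nn.Embedding(vocab_size, d_model)
        self.language_decoder = nn.TransformerDecoderLayer(
            d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)
        # Small component: cross-attention over knowledge representations,
        # fitted on the limited knowledge-grounded examples.
        self.knowledge_attn = nn.MultiheadAttention(
            d_model, n_heads, batch_first=True)

    def forward(self, response_ids, context_hidden, knowledge_hidden=None):
        h = self.embed(response_ids)
        # Attend over the dialogue context as decoder memory.
        h = self.language_decoder(h, context_hidden)
        if knowledge_hidden is not None:  # grounded mode
            k, _ = self.knowledge_attn(h, knowledge_hidden, knowledge_hidden)
            h = h + k  # fuse the knowledge signal residually
        return self.out(h)

def grounded_finetune_params(model):
    # Only the knowledge-dependent parameters are optimized on the
    # low-resource grounded data; everything else stays frozen.
    return model.knowledge_attn.parameters()
```

Under this split, the embedding, language decoder, and output head would first be learned from ungrounded dialogues and unstructured documents, then frozen, leaving only the small knowledge-attention block to be fitted on the few grounded examples.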

Similar Work