
A Knowledge-grounded Dialog System Based On Pre-trained Language Models

Zhang Weijie, Chen Jiaoxuan, Wu Haipang, Wan Sanhui, Li Gongfeng. arXiv 2021

[Paper]    
Tags: Efficiency And Optimization, Fine Tuning, Model Architecture, Pretraining Methods, RAG, Tools, Transformer

We present a knowledge-grounded dialog system developed for the ninth Dialog System Technology Challenge (DSTC9) Track 1 - Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access. We leverage transfer learning with existing pre-trained language models to accomplish the tasks in this challenge track. Specifically, we divide the task into four sub-tasks and fine-tune a separate Transformer model on each of them. We make additional changes that yield gains in both performance and efficiency, including combining the models with traditional entity-matching techniques and adding a pointer network to the output layer of the language model.
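The paper's abstract does not give implementation details, but the pointer-network idea, letting the decoder copy tokens from the dialog context or retrieved knowledge snippet instead of generating only from the vocabulary, can be sketched roughly as below. This is a minimal illustration in the pointer-generator style, not the authors' code; the class name `PointerGeneratorHead` and all tensor names are assumptions made for the example.

```python
# A minimal sketch (assumed design, not the authors' implementation) of a
# pointer network added to a pre-trained LM's output layer: the final
# distribution mixes the LM's vocabulary distribution with a copy
# distribution over the input tokens, weighted by a learned gate p_gen.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerGeneratorHead(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)
        self.gate = nn.Linear(hidden_size, 1)  # scalar gate per decoder step

    def forward(self, hidden, encoder_hidden, input_ids):
        # hidden:         (batch, tgt_len, hidden)  decoder hidden states
        # encoder_hidden: (batch, src_len, hidden)  context/knowledge states
        # input_ids:      (batch, src_len)          context token ids
        p_vocab = F.softmax(self.lm_head(hidden), dim=-1)

        # Attention of each decoder state over the source tokens serves
        # as the copy distribution.
        attn = torch.einsum("bth,bsh->bts", hidden, encoder_hidden)
        p_copy_src = F.softmax(attn, dim=-1)               # (b, tgt, src)

        # Scatter the copy probabilities back into vocabulary space.
        p_copy = torch.zeros_like(p_vocab)
        index = input_ids.unsqueeze(1).expand(-1, hidden.size(1), -1)
        p_copy.scatter_add_(-1, index, p_copy_src)

        # Mix generation and copying with the learned gate.
        p_gen = torch.sigmoid(self.gate(hidden))           # (b, tgt, 1)
        return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```

Under this sketch, the head replaces the LM's usual softmax layer, so tokens that appear in the grounding knowledge (e.g., entity names) receive probability mass even when they are rare in the vocabulary.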
