Fine-tuning BERT For Schema-guided Zero-shot Dialogue State Tracking

Ruan Yu-ping, Ling Zhen-hua, Gu Jia-chen, Liu Quan. Arxiv 2020

[Paper]    
Applications BERT Fine Tuning Model Architecture Pretraining Methods Tools Training Techniques

We present our work on Track 4 of the 8th Dialogue System Technology Challenge (DSTC8). DSTC8 Track 4 targets dialogue state tracking (DST) in a zero-shot setting, where the model must generalize to unseen service APIs given only a schema definition of those target APIs. Serving as the core of many virtual assistants such as Siri, Alexa, and Google Assistant, DST keeps track of the user's goal and of what has happened in the dialogue history, mainly through intent prediction, slot filling, and user state tracking, which tests a model's natural language understanding. Recently, pretrained language models have achieved state-of-the-art results and shown impressive generalization ability on various NLP tasks, providing a promising route to zero-shot language understanding. Based on this, we propose a schema-guided paradigm for zero-shot dialogue state tracking (SGP-DST) by fine-tuning BERT, one of the most popular pretrained language models. The SGP-DST system contains four modules, for intent prediction, slot prediction, slot transfer prediction, and user state summarizing respectively. According to the official evaluation results, our SGP-DST (team12) ranked 3rd on joint goal accuracy (the primary metric for ranking submissions) and 1st on requested slots F1 among 25 participating teams.

Similar Work