Self-training Improves Pre-training For Few-shot Learning In Task-oriented Dialog Systems

Fei Mi, Wanhao Zhou, Fengyu Cai, Lingjing Kong, Minlie Huang, Boi Faltings. arXiv 2021

[Paper]    
Tags: BERT, Few Shot, Masked Language Model, Model Architecture, Pretraining Methods, Training Techniques

As labeling data for the different modules of task-oriented dialog (ToD) systems is expensive, a major challenge is to train each module with the least amount of labeled data. Recently, large-scale pre-trained language models have shown promising results for few-shot learning in ToD. In this paper, we devise a self-training approach that utilizes abundant unlabeled dialog data to further improve state-of-the-art pre-trained models in few-shot learning scenarios for ToD systems. Specifically, we propose a self-training approach that iteratively labels the most confident unlabeled data to train a stronger Student model. Moreover, a new text augmentation technique (GradAug) is proposed to better train the Student by replacing non-crucial tokens using a masked language model. We conduct extensive experiments and present analyses on four downstream tasks in ToD, including intent classification, dialog state tracking, dialog act prediction, and response selection. Empirical results demonstrate that the proposed self-training approach consistently improves state-of-the-art pre-trained models (BERT, ToD-BERT) when only a small number of labeled examples are available.
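To make the iterative labeling loop concrete, here is a minimal sketch of confidence-based self-training. It uses scikit-learn's `LogisticRegression` as a stand-in for the pre-trained Student model, and the `confidence_threshold` and `num_rounds` values are illustrative assumptions, not the paper's settings or selection criterion.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled,
               num_rounds=5, confidence_threshold=0.9):
    """Confidence-based self-training sketch: each round, the current
    model pseudo-labels its most confident unlabeled examples, and a
    new Student model is trained on the enlarged training set."""
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    for _ in range(num_rounds):
        probs = model.predict_proba(X_unlabeled)
        conf = probs.max(axis=1)
        mask = conf >= confidence_threshold
        if not mask.any():
            break  # no sufficiently confident pseudo-labels remain
        # Add confident pseudo-labeled examples to the training set.
        X_train = np.vstack([X_train, X_unlabeled[mask]])
        y_train = np.concatenate([y_train, probs[mask].argmax(axis=1)])
        X_unlabeled = X_unlabeled[~mask]  # consume those examples
        # Retrain the Student; it becomes the labeler for the next round.
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model
```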
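The GradAug idea of replacing non-crucial tokens via a masked language model can likewise be sketched with the HuggingFace `transformers` fill-mask pipeline. Note the hedge: in the paper, non-crucial positions are identified via gradient-based saliency, whereas this toy version takes them as an input, and the hypothetical `augment` helper is for illustration only.

```python
from transformers import pipeline

# Off-the-shelf masked language model used to propose replacements.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def augment(sentence, non_crucial_positions):
    """Replace tokens at the given positions with masked-LM predictions.
    GradAug derives these positions from gradients; here they are
    supplied by the caller to keep the sketch self-contained."""
    tokens = sentence.split()
    for pos in non_crucial_positions:
        masked = tokens.copy()
        masked[pos] = fill_mask.tokenizer.mask_token
        # Take the top prediction (a single wordpiece) for the mask.
        prediction = fill_mask(" ".join(masked))[0]["token_str"]
        tokens[pos] = prediction
    return " ".join(tokens)

# Example: perturb two presumably non-crucial tokens of an utterance.
print(augment("i would like to book a table for two", [1, 2]))
```

Augmented utterances produced this way preserve the crucial, label-bearing tokens while diversifying the rest, which is what makes them useful extra training signal for the Student.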
