Multi-Stage Pre-training Enhanced by ChatGPT for Multi-Scenario Multi-Domain Dialogue Summarization

Zhou Weixiao, Li Gengyao, Cheng Xianfu, Liang Xinnian, Zhu Junnan, Zhai Feifei, Li Zhoujun. arXiv 2023

[Paper]    
Applications · Few Shot · Fine Tuning · GPT · Model Architecture · Pretraining Methods · Training Techniques

Dialogue summarization spans a wide range of scenarios and domains, yet existing methods generally apply only to a specific scenario or domain. In this study, we propose a new pre-trained model specifically designed for multi-scenario multi-domain dialogue summarization. It adopts a multi-stage pre-training strategy to reduce the gap between the pre-training and fine-tuning objectives. Specifically, we first conduct domain-aware pre-training on large-scale multi-scenario multi-domain dialogue data to enhance the adaptability of our pre-trained model. Then, we conduct task-oriented pre-training on large-scale multi-scenario multi-domain “dialogue-summary” parallel data annotated by ChatGPT to enhance its dialogue summarization ability. Experimental results on three dialogue summarization datasets from different scenarios and domains show that our pre-trained model significantly outperforms previous state-of-the-art models in full fine-tuning, zero-shot, and few-shot settings.
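The two-stage recipe in the abstract maps naturally onto sequential seq2seq training runs over the same backbone. Below is a minimal sketch using Hugging Face transformers, assuming a BART-style encoder-decoder; the base checkpoint, toy data, stage objectives, and hyperparameters are illustrative assumptions rather than the authors' setup (in particular, stage 1 here uses plain dialogue reconstruction as a stand-in for the paper's domain-aware objective).

```python
from torch.utils.data import Dataset
from transformers import (
    BartForConditionalGeneration,
    BartTokenizerFast,
    Trainer,
    TrainingArguments,
)

class Seq2SeqDataset(Dataset):
    """Tokenizes (source, target) text pairs for seq2seq training."""
    def __init__(self, pairs, tokenizer, max_len=512):
        self.pairs, self.tok, self.max_len = pairs, tokenizer, max_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, i):
        src, tgt = self.pairs[i]
        enc = self.tok(src, truncation=True, max_length=self.max_len,
                       padding="max_length", return_tensors="pt")
        lbl = self.tok(tgt, truncation=True, max_length=self.max_len,
                       padding="max_length", return_tensors="pt")
        labels = lbl.input_ids.squeeze(0)
        labels[labels == self.tok.pad_token_id] = -100  # ignore padding in loss
        return {
            "input_ids": enc.input_ids.squeeze(0),
            "attention_mask": enc.attention_mask.squeeze(0),
            "labels": labels,
        }

def run_stage(model, dataset, output_dir):
    """One pre-training stage: standard cross-entropy seq2seq training."""
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=1,
                             per_device_train_batch_size=2, report_to=[])
    Trainer(model=model, args=args, train_dataset=dataset).train()
    return model

tok = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

dialogue = "A: Any update on ticket 42? B: Fixed, deploying tonight."

# Stage 1: domain-aware pre-training on raw multi-scenario multi-domain
# dialogues (reconstruction here is a toy stand-in for that objective).
stage1 = Seq2SeqDataset([(dialogue, dialogue)], tok)
model = run_stage(model, stage1, "stage1_domain_aware")

# Stage 2: task-oriented pre-training on "dialogue-summary" pairs
# annotated by ChatGPT, aligning pre-training with the downstream task.
stage2 = Seq2SeqDataset(
    [(dialogue, "Ticket 42 is fixed and will be deployed tonight.")], tok)
model = run_stage(model, stage2, "stage2_task_oriented")
```

Running the two stages on the same model object is what narrows the objective gap the abstract describes: stage 2 reuses the domain-adapted weights from stage 1, so the final checkpoint has already seen the summarization task format before any fine-tuning.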

Similar Work