
A Preliminary Evaluation Of ChatGPT For Zero-shot Dialogue Understanding

Pan Wenbo, Chen Qiguang, Xu Xiao, Che Wanxiang, Qin Libo. arXiv 2023

[Paper]    
Tags: Attention Mechanism, GPT, Model Architecture, Prompting, Training Techniques

Zero-shot dialogue understanding aims to enable dialogue systems to track users' needs without any training data, a setting that has gained increasing attention. In this work, we investigate the understanding ability of ChatGPT on zero-shot dialogue understanding tasks, including spoken language understanding (SLU) and dialogue state tracking (DST). Experimental results on four popular benchmarks reveal the great potential of ChatGPT for zero-shot dialogue understanding. In addition, extensive analysis shows that ChatGPT benefits from a multi-turn interactive prompt in the DST task but struggles to perform slot filling for SLU. Finally, we summarize several unexpected behaviors of ChatGPT in dialogue understanding tasks, hoping to provide some insights for future research on building zero-shot dialogue understanding systems with Large Language Models (LLMs).
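To make the "multi-turn interactive prompt" idea concrete, the sketch below shows one plausible way to query an LLM for DST turn by turn, feeding the accumulated state back into each prompt rather than asking for the full state from the whole history at once. This is a hypothetical illustration, not the authors' exact prompt: `query_llm`, the prompt wording, and the slot names are all invented placeholders.

```python
# Hypothetical sketch of multi-turn interactive prompting for dialogue
# state tracking (DST). `query_llm` is a placeholder for any real
# chat-completion call; the slot schema and dialogue are invented.

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError("Wire this to an actual LLM backend.")

def track_state_interactively(dialogue_turns: list[str],
                              slot_schema: list[str]) -> dict[str, str]:
    """Query the model once per user turn, carrying the running state forward."""
    state: dict[str, str] = {}
    for turn in dialogue_turns:
        prompt = (
            "You are tracking the dialogue state.\n"
            f"Slots to fill: {', '.join(slot_schema)}\n"
            f"State so far: {state}\n"
            f"New user utterance: {turn}\n"
            "Update the state and reply with 'slot=value' pairs, one per line."
        )
        reply = query_llm(prompt)
        # Parse the model's reply and merge any recognized slots into the state.
        for line in reply.splitlines():
            if "=" in line:
                slot, value = line.split("=", 1)
                if slot.strip() in slot_schema:
                    state[slot.strip()] = value.strip()
    return state

# Example usage with an invented MultiWOZ-style schema:
# track_state_interactively(
#     ["I need a cheap hotel in the north.", "Book it for two nights."],
#     ["hotel-pricerange", "hotel-area", "hotel-stay"],
# )
```

The turn-by-turn loop mirrors the interactive setting the paper finds beneficial for DST: the model only has to update a small, explicit state at each step instead of reconstructing it from the entire dialogue history.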
