Prompting And Evaluating Large Language Models For Proactive Dialogues: Clarification, Target-guided, And Non-collaboration

Deng Yang, Liao Lizi, Chen Liang, Wang Hongru, Lei Wenqiang, Chua Tat-Seng. arXiv 2023

[Paper]    
Agentic, Applications, GPT, Model Architecture, Prompting

Conversational systems based on Large Language Models (LLMs), such as ChatGPT, show exceptional proficiency in context understanding and response generation. Despite these impressive capabilities, they still have limitations, such as providing randomly guessed answers to ambiguous queries or failing to refuse users' requests, both of which indicate a lack of proactivity on the conversational agent's part. This raises the question of whether LLM-based conversational systems are equipped to handle proactive dialogue problems. In this work, we conduct a comprehensive analysis of LLM-based conversational systems, focusing on three aspects of proactive dialogue systems: clarification, target-guided, and non-collaborative dialogues. To trigger the proactivity of LLMs, we propose the Proactive Chain-of-Thought (ProCoT) prompting scheme, which augments LLMs with a goal-planning capability over descriptive reasoning chains. Empirical findings are discussed to promote future studies on LLM-based proactive dialogue systems.
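
The sketch below illustrates the general shape of a ProCoT-style prompt for the clarification setting: the model is asked to first decide which dialogue act to take (answer directly or ask a clarifying question), explain its reasoning, and only then produce the response. The template wording, the `build_procot_prompt` helper, and the `call_llm` placeholder are illustrative assumptions, not the paper's exact prompt.

```python
# Minimal sketch of a Proactive Chain-of-Thought (ProCoT)-style prompt for
# clarification dialogues. The template text below is an assumed paraphrase of
# the scheme described in the abstract, not the paper's verbatim prompt.
# `call_llm` (mentioned in the usage comment) is a hypothetical stand-in for
# whichever chat-completion API is actually used.

PROCOT_TEMPLATE = """Given the task background and the conversation history,
first analyse whether the user's latest query is ambiguous. Then decide which
act to take: [Directly Answer] or [Ask a Clarification Question]. Explain your
reasoning before giving the final response.

Task background: {background}
Conversation history:
{history}

Thought and response:"""


def build_procot_prompt(background: str, history: list[str]) -> str:
    """Fill the ProCoT template with the task background and dialogue turns."""
    history_text = "\n".join(f"- {turn}" for turn in history)
    return PROCOT_TEMPLATE.format(background=background, history=history_text)


if __name__ == "__main__":
    prompt = build_procot_prompt(
        background="Open-domain question answering over a news corpus.",
        history=["User: When did he win the award?"],  # ambiguous referent "he"
    )
    print(prompt)
    # Pass `prompt` to the LLM of your choice, e.g. reply = call_llm(prompt),
    # and parse the chosen act out of the generated "Thought and response".
```

The same pattern extends to the other two settings studied in the paper: for target-guided dialogue the act space becomes candidate topics to steer toward, and for non-collaborative dialogue it becomes negotiation or refusal strategies.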

Similar Work