Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs

Hongru Wang, Rui Wang, Fei Mi, Yang Deng, Zezhong Wang, Bin Liang, Ruifeng Xu, Kam-Fai Wong. arXiv 2023

[Paper]
Applications GPT Model Architecture Prompting Reinforcement Learning

Large Language Models (LLMs) such as ChatGPT greatly empower dialogue systems with strong language understanding and generation capabilities. However, most previous work prompts the LLM to generate a response directly from the dialogue context, overlooking the underlying linguistic cues about the user's status that the context exhibits. Such in-depth dialogue scenarios make it difficult for existing LLMs to infer the user's hidden needs and respond satisfactorily in a single inference step. To this end, the authors propose a novel linguistic cue-based chain-of-thought method (*Cue*-CoT), which augments LLM inference with an intermediate reasoning step that first identifies the cues exhibited in the dialogue, aiming to produce a more personalized and engaging response. To evaluate the approach, they build a benchmark of in-depth dialogue questions consisting of 6 datasets in both Chinese and English, targeting 3 major linguistic cues in conversation: *personality*, *emotion*, and *psychology*. They conduct extensive experiments on the proposed benchmark with 5 LLMs under both zero-shot and one-shot settings. Empirical results demonstrate that the *Cue*-CoT method outperforms standard prompting in terms of both *helpfulness* and *acceptability* on all datasets.
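To make the two-step idea concrete, here is a minimal sketch of cue-based chain-of-thought prompting as the abstract describes it. This is not the authors' code: the `chat` helper and the prompt wording are illustrative assumptions standing in for any chat-completion API and for the paper's actual prompts.

```python
def chat(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM chat-completion API."""
    raise NotImplementedError("plug in your LLM client here")


def cue_cot_respond(dialogue: str) -> str:
    """Two-step Cue-CoT-style response generation (illustrative sketch)."""
    # Step 1: intermediate reasoning step -- ask the LLM to infer the
    # linguistic cues (personality, emotion, psychology) exhibited by
    # the user in the dialogue context.
    cue_prompt = (
        "Given the dialogue below, describe the user's status "
        "(personality, emotion, psychology) exhibited in the context.\n\n"
        f"Dialogue:\n{dialogue}\n\nUser status:"
    )
    user_status = chat(cue_prompt)

    # Step 2: condition the final response on both the dialogue and the
    # inferred cues, instead of generating from the context alone as in
    # standard prompting.
    response_prompt = (
        f"Dialogue:\n{dialogue}\n\n"
        f"User status:\n{user_status}\n\n"
        "Given the dialogue and the user's status above, write a "
        "helpful, personalized response to the last user turn."
    )
    return chat(response_prompt)
```

The key design point, per the abstract, is that the cue-inference output is fed back into the response prompt, so the model does not have to recover the user's hidden needs in a single step.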

Similar Work