
Self-explanation Prompting Improves Dialogue Understanding In Large Language Models

Gao Haoyu, Lin Ting-en, Li Hangyu, Yang Min, Wu Yuchuan, Ma Wentao, Li Yongbin. arXiv 2023

[Paper]    
Few Shot · Interpretability And Explainability · Prompting

Task-oriented dialogue (TOD) systems help users accomplish various tasks through multi-turn dialogues, but Large Language Models (LLMs) often struggle to comprehend these intricate contexts. In this study, we propose a novel “Self-Explanation” prompting strategy to enhance the comprehension abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires the model to analyze each dialogue utterance before executing the task, thereby improving performance across various dialogue-centric tasks. Experimental results on six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts, demonstrating its potential as a powerful tool for enhancing LLMs’ comprehension in complex dialogue tasks.
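The exact prompt wording used in the paper is not reproduced in this entry; the sketch below only illustrates the general idea of asking the model to explain each dialogue utterance before carrying out the downstream task. The function name `build_self_explanation_prompt`, the instruction phrasing, and the demo dialogue are all illustrative assumptions, not the authors' prompt.

```python
# Minimal sketch of a self-explanation style zero-shot prompt for a
# multi-turn, task-oriented dialogue. The prompt structure and wording here
# are illustrative assumptions, not the paper's exact prompt.

from typing import List, Tuple


def build_self_explanation_prompt(
    dialogue: List[Tuple[str, str]], task_instruction: str
) -> str:
    """Assemble a prompt that asks the model to first explain every utterance
    in the dialogue and only then perform the requested task."""
    lines = ["You will read a task-oriented dialogue between a user and a system.", "Dialogue:"]
    for i, (speaker, utterance) in enumerate(dialogue, start=1):
        lines.append(f"Turn {i} ({speaker}): {utterance}")
    # Self-explanation step: analyze each utterance before task execution.
    lines.append("First, explain the intent of each turn in order, one sentence per turn.")
    lines.append(f"Then, using your explanations, {task_instruction}")
    return "\n".join(lines)


if __name__ == "__main__":
    demo_dialogue = [
        ("user", "I need a cheap Italian restaurant in the city centre."),
        ("system", "Zizzi Cambridge is a cheap Italian place in the centre."),
        ("user", "Great, book a table for two at 19:00 on Friday."),
    ]
    prompt = build_self_explanation_prompt(
        demo_dialogue,
        task_instruction="extract the dialogue state as slot-value pairs.",
    )
    print(prompt)
```

The assembled string would then be sent to an LLM as a single zero-shot prompt; because the approach is task-agnostic, only `task_instruction` needs to change for other dialogue-centric tasks.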

Similar Work