Steering Conversational Large Language Models For Long Emotional Support Conversations

Navid Madani, Sougata Saha, Rohini Srihari. arXiv 2024

[Paper]    
Attention Mechanism, Model Architecture, Prompting, Reinforcement Learning, Uncategorized

In this study, we address the challenge of getting large language models (LLMs) to consistently follow emotional support strategies in long conversations. We introduce the Strategy-Relevant Attention (SRA) metric, a model-agnostic measure designed to evaluate how effectively LLMs adhere to strategic prompts in emotional support contexts. By analyzing conversations from the Emotional Support Conversation dataset (ESConv) with LLaMA models, we demonstrate that SRA is significantly correlated with a model's ability to sustain the outlined strategy throughout an interaction. Our findings reveal that SRA-informed prompts lead to stronger strategic adherence, producing conversations that more reliably exhibit the desired emotional support strategies as the dialogue grows longer. Furthermore, we contribute a comprehensive, multi-branch synthetic conversation dataset for ESConv, featuring a variety of strategy continuations generated with our optimized prompting method. The code and data are publicly available on our GitHub.
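Since the abstract does not spell out how SRA is computed, the snippet below is only a minimal sketch of an attention-based strategy-relevance score in that spirit: the share of attention mass that response tokens place on the strategy-instruction span of the prompt, averaged over layers and heads. The model name, the `strategy_relevant_attention` helper, and the aggregation choices are all illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of an attention-based strategy-relevance score.
# Assumptions: a HuggingFace causal LM with a fast tokenizer, and SRA
# approximated as the fraction of attention that response tokens pay
# to the strategy-description tokens inside the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # illustrative choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def strategy_relevant_attention(prompt: str, strategy_span: str, response: str) -> float:
    """Rough proxy: mean fraction of attention mass that response tokens
    place on the strategy-span tokens, averaged over layers and heads."""
    # Character offsets of the strategy description inside the prompt.
    start_char = prompt.index(strategy_span)
    end_char = start_char + len(strategy_span)

    full_text = prompt + response
    enc = tokenizer(full_text, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]

    # Token indices covering the strategy span.
    strategy_idx = [
        i for i, (s, e) in enumerate(offsets.tolist())
        if s >= start_char and e <= end_char and e > s
    ]
    # Approximate boundary between prompt and response tokens.
    prompt_len = len(tokenizer(prompt)["input_ids"])
    response_idx = list(range(prompt_len, enc["input_ids"].shape[1]))

    with torch.no_grad():
        out = model(**enc, output_attentions=True)

    # out.attentions: one (batch, heads, seq, seq) tensor per layer.
    att = torch.stack(out.attentions).float()   # (layers, 1, heads, seq, seq)
    att = att.mean(dim=(0, 1, 2))                # average -> (seq, seq)

    mass_on_strategy = att[response_idx][:, strategy_idx].sum(dim=-1)
    total_mass = att[response_idx].sum(dim=-1)   # ~1 per row under causal softmax
    return (mass_on_strategy / total_mass).mean().item()
```

Under this reading, a higher score would indicate that the generated response is still attending to the prescribed strategy late in the conversation, which is the behavior the SRA-informed prompts are meant to preserve.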

Similar Work