Can Large Language Models Be Good Emotional Supporter? Mitigating Preference Bias On Emotional Support Conversation

Kang Dongjin, Kim Sunghwan, Kwon Taeyoon, Moon Seungjun, Cho Hyunsouk, Yu Youngjae, Lee Dongha, Yeo Jinyoung. arXiv 2024

[Paper]    
Ethics And Bias · Reinforcement Learning · Security

Emotional Support Conversation (ESC) is a task aimed at alleviating individuals' emotional distress through daily conversation. Given its inherent complexity and non-intuitive nature, the ESConv dataset incorporates support strategies to facilitate the generation of appropriate responses. Despite the remarkable conversational ability of recent large language models (LLMs), previous studies have suggested that they often struggle to provide useful emotional support. Hence, this work first analyzes the results of LLMs on ESConv, revealing challenges in selecting the correct strategy and a notable preference for a specific one. Motivated by these observations, we explore the impact of this inherent preference on the quality of emotional support and observe that a high preference for specific strategies hinders effective emotional support and undermines the models' robustness in predicting the appropriate strategy. Moreover, we conduct a methodological study to offer insights into the approaches necessary for LLMs to serve as proficient emotional supporters. Our findings emphasize that (1) keeping the preference for specific strategies low is key to making progress on emotional support, (2) external assistance helps reduce preference bias, and (3) existing LLMs alone cannot become good emotional supporters. These insights suggest promising avenues for future research on enhancing the emotional intelligence of LLMs.
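Since this page does not reproduce the paper's formal definition of preference bias, the following is a minimal, hypothetical sketch of how one might quantify it: comparing how often a model selects each ESConv support strategy against how often that strategy is the gold label. The strategy names are the eight from ESConv; the metric, function names, and counts are illustrative assumptions, not the paper's actual method or results.

```python
from collections import Counter

# The eight support strategies defined in the ESConv dataset.
STRATEGIES = [
    "Question", "Restatement or Paraphrasing", "Reflection of feelings",
    "Self-disclosure", "Affirmation and Reassurance", "Providing Suggestions",
    "Information", "Others",
]

def preference_scores(gold, predicted):
    """Hypothetical bias measure: the ratio of how often a model picks each
    strategy to how often that strategy is actually the gold label.
    Values far above 1.0 suggest the model over-prefers that strategy."""
    n = len(gold)
    gold_freq = Counter(gold)
    pred_freq = Counter(predicted)
    return {
        s: (pred_freq[s] / n) / max(gold_freq[s] / n, 1e-9)
        for s in STRATEGIES
    }

# Toy illustration (made-up labels): a model that over-selects
# "Providing Suggestions" regardless of what the situation calls for.
gold = (["Question"] * 40 + ["Affirmation and Reassurance"] * 30
        + ["Providing Suggestions"] * 30)
pred = (["Providing Suggestions"] * 70 + ["Question"] * 20
        + ["Affirmation and Reassurance"] * 10)

for strategy, score in sorted(preference_scores(gold, pred).items(),
                              key=lambda x: -x[1]):
    if score > 0:
        print(f"{strategy:30s} preference ratio = {score:.2f}")
```

On this toy data, "Providing Suggestions" gets a ratio well above 1.0 while the under-used strategies fall below it, which is the kind of skew the abstract describes as hindering effective support.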

Similar Work