Understanding The Impact Of Long-term Memory On Self-disclosure With Large Language Model-driven Chatbots For Public Health Intervention

Eunkyung Jo, Yuin Jeong, SoHyun Park, Daniel A. Epstein, Young-Ho Kim. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI 2024)


Recent large language models (LLMs) offer the potential to support public health monitoring by facilitating health disclosure through open-ended conversations, but they rarely preserve the knowledge gained about individuals across repeated interactions. Augmenting LLMs with long-term memory (LTM) presents an opportunity to improve engagement and self-disclosure, but we lack an understanding of how LTM affects people's interaction with LLM-driven chatbots in public health interventions. We examine the case of CareCall, an LLM-driven voice chatbot with LTM, through an analysis of 1,252 call logs and interviews with nine users. We found that LTM enhanced health disclosure and fostered positive perceptions of the chatbot by offering familiarity. However, we also observed challenges in promoting self-disclosure through LTM, particularly around addressing chronic health conditions and privacy concerns. We discuss considerations for LTM integration in LLM-driven chatbots for public health monitoring, including carefully deciding which topics need to be remembered in light of public health goals.