
Chain-of-Interaction: Enhancing Large Language Models For Psychiatric Behavior Understanding By Dyadic Contexts

Han Guangzeng, Liu Weisi, Huang Xiaolei, Borsari Brian. arXiv 2024

[Paper]    
Prompting RAG Reinforcement Learning Tools

Automatically coding patient behaviors is essential to support decision making for psychotherapists during motivational interviewing (MI), a collaborative communication intervention that addresses psychiatric issues such as alcohol and drug addiction. While the behavior coding task has rapidly adopted machine learning to predict patient states during MI sessions, the lack of domain-specific knowledge and the neglect of patient-therapist interactions remain major challenges in developing and deploying those models in real practice. To address these challenges, we introduce the Chain-of-Interaction (CoI) prompting method, which contextualizes large language models (LLMs) for psychiatric decision support through dyadic interactions. The CoI prompting approach systematically breaks the coding task down into three key reasoning steps: extracting patient engagement, learning therapist question strategies, and integrating the dyadic interactions between patients and therapists. This approach enables large language models to leverage the coding scheme, patient state, and domain knowledge for patient behavioral coding. Experiments on real-world datasets demonstrate the effectiveness and flexibility of our prompting method with multiple state-of-the-art LLMs over existing prompting baselines. We also conduct extensive ablation analyses that demonstrate the critical role of dyadic interactions in applying LLMs to psychotherapy behavior understanding.
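
Read as a pipeline, the three reasoning steps can be chained as sequential prompts, with each step's output fed into the next before a final coding decision. The sketch below illustrates that flow; the `llm` callable, the prompt wording, and the MISC-style label set are assumptions for illustration, not the authors' released implementation.

```python
# A minimal sketch of Chain-of-Interaction (CoI)-style prompting.
# NOTE: the `llm` callable, the prompt wording, and the label set are
# illustrative assumptions, not the paper's released code.
from typing import Callable

LABELS = ["Change Talk", "Sustain Talk", "Follow/Neutral"]  # assumed MISC-style patient codes


def code_patient_utterance(llm: Callable[[str], str],
                           therapist_turn: str,
                           patient_turn: str) -> str:
    """Run the three CoI reasoning steps, then ask for a final behavior code."""
    # Step 1: extract patient engagement from the patient's utterance.
    engagement = llm(
        "You are assisting with motivational interviewing (MI) behavior coding.\n"
        f"Patient utterance: {patient_turn}\n"
        "Briefly describe the patient's engagement (attitude toward change, affect)."
    )

    # Step 2: characterize the therapist's question strategy.
    strategy = llm(
        f"Therapist utterance: {therapist_turn}\n"
        "Briefly describe the therapist's question strategy "
        "(e.g., open vs. closed question, reflection, affirmation)."
    )

    # Step 3: integrate the dyadic interaction between therapist and patient.
    interaction = llm(
        f"Therapist: {therapist_turn}\nPatient: {patient_turn}\n"
        f"Patient engagement: {engagement}\nTherapist strategy: {strategy}\n"
        "Explain how the therapist's strategy shapes the patient's response."
    )

    # Final step: assign the behavior code, conditioned on all prior reasoning.
    return llm(
        f"Interaction analysis: {interaction}\n"
        f"Assign exactly one patient behavior code from {LABELS} "
        f"to the patient utterance: {patient_turn}\nAnswer with the code only."
    )
```

Each intermediate output is carried forward so the final coding prompt sees the patient state, the therapist strategy, and the dyadic context together, mirroring the chain described in the abstract.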

Similar Work