
Sentiment Reasoning For Healthcare

Khai Le-Duc, Khai-Nguyen Nguyen, Bach Phan Tat, Duy Le, Jerry Ngo, Long Vo-Dang, Anh Totti Nguyen, Truong-Son Hy. arXiv 2024

[Paper] [Code]    
Tags: Ethics And Bias, Has Code, Multimodal Models, Reinforcement Learning, Tools, Training Techniques

Transparency in AI decision-making is crucial in healthcare due to the severe consequences of errors, and it is essential for building trust between AI systems and users in sentiment analysis tasks. Incorporating reasoning capabilities helps Large Language Models (LLMs) understand human emotions within broader contexts, handle nuanced and ambiguous language, and infer underlying sentiments that may not be explicitly stated. In this work, we introduce a new task, Sentiment Reasoning, for both speech and text modalities, along with our proposed multimodal multitask framework and dataset. Our study shows that rationale-augmented training enhances model performance in sentiment classification across both human-transcript and ASR settings. We also found that the model-generated rationales typically use different vocabulary than human-written rationales while maintaining similar semantics. All code, data (English-translated and Vietnamese), and models are published online: https://github.com/leduckhai/MultiMed

Similar Work