
An Assessment on Comprehending Mental Health through Large Language Models

Arcan Mihael, Niland David-Paul, Delahunty Fionn. Arxiv 2024

[Paper]    
Applications BERT GPT Model Architecture Pretraining Methods Transformer

Mental health challenges pose a considerable global burden on individuals and communities. Recent data indicate that more than 20% of adults may encounter at least one mental disorder in their lifetime. On the one hand, advancements in large language models have enabled diverse applications; on the other, a significant research gap persists in understanding and enhancing the potential of large language models within the mental health domain. Across these applications, an outstanding question is whether large language models can comprehend expressions of human mental health conditions in natural language. This study presents an initial evaluation of large language models in addressing this gap. To this end, we compare the performance of Llama-2 and ChatGPT with classical machine learning and deep learning models. Our results on the DAIC-WOZ dataset show that transformer-based models, such as BERT and XLNet, outperform the large language models.
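
The paper itself does not include code, but the comparison it runs is a standard text-classification setup. The sketch below is a minimal, hypothetical illustration of the fine-tuned transformer baseline (BERT for binary depression screening); the DAIC-WOZ loading step is replaced with placeholder examples, since the dataset is licensed and its transcript format is not reproduced here.

```python
# Minimal sketch of the transformer baseline the paper compares against:
# BERT fine-tuned for binary depression screening. The two example texts
# below are hypothetical stand-ins for DAIC-WOZ interview transcripts;
# labels follow the usual DAIC-WOZ convention of 1 for a positive
# PHQ-8 depression screen.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # the XLNet baseline would use "xlnet-base-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Hypothetical snippets standing in for DAIC-WOZ transcripts.
texts = [
    "i have been feeling down and tired most days",
    "things have been going well for me lately",
]
labels = torch.tensor([1, 0])

enc = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# A few gradient steps on the toy batch, purely for illustration.
model.train()
for _ in range(3):
    loss = model(**enc, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Predict by taking the argmax over the two class logits
# (0 = screened negative, 1 = screened positive).
model.eval()
with torch.no_grad():
    preds = model(**enc).logits.argmax(dim=-1)
print(preds.tolist())
```

The Llama-2 and ChatGPT side of the comparison would typically be prompt-based rather than fine-tuned, which is one plausible reason the fine-tuned BERT and XLNet baselines come out ahead on DAIC-WOZ.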

Similar Work