
Domain-specific Improvement On Psychotherapy Chatbot Using Assistant

Kang Cheng, Novak Daniel, Urbanova Katerina, Cheng Yuqing, Hu Yong. arXiv 2024

[Paper]
Fine Tuning, Pretraining Methods, Training Techniques

Large language models (LLMs) have demonstrated impressive generalization capabilities on specific tasks with human-written instruction data. However, the limited quantity, diversity, and professional expertise of such instruction data raise concerns about the performance of LLMs on psychotherapy tasks when given domain-specific instructions. To address this, we first propose Domain-Specific Assistant Instructions based on AlexanderStreet therapy, and second, we apply adaptation fine-tuning and retrieval-augmented generation (RAG) to improve pre-trained LLMs. Through quantitative evaluation of linguistic quality using both automatic and human evaluation, we observe that LLMs fine-tuned on Psychotherapy Assistant Instructions outperform state-of-the-art LLM response baselines. Our Assistant-Instruction approach offers a semi-annotation method to align pre-trained LLMs with instructions and supplies them with additional psychotherapy knowledge.
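The retrieval-augmented generation step described above can be illustrated with a minimal sketch: retrieve the domain passage most similar to the user query and prepend it to the prompt before calling the LLM. The bag-of-words cosine retriever, the example corpus, and the prompt template below are illustrative assumptions, not the paper's actual implementation (which builds on AlexanderStreet therapy data).

```python
from collections import Counter
from math import sqrt

def bow(text):
    # Bag-of-words vector as a token -> count mapping (toy tokenizer).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=1):
    # Return the k corpus passages most similar to the query.
    q = bow(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    # Prepend the retrieved domain passage(s) to the user query;
    # the resulting prompt would then be sent to the LLM.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical psychotherapy-domain passages standing in for the real corpus.
corpus = [
    "Cognitive behavioral therapy targets unhelpful thought patterns.",
    "Medication adherence is tracked by the prescribing clinician.",
]
prompt = build_prompt("What does cognitive behavioral therapy target?", corpus)
```

A production version would swap the bag-of-words retriever for dense embeddings and a vector index, but the pipeline shape (retrieve, then augment the prompt) is the same.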

Similar Work