
Adapting Open-source Large Language Models For Cost-effective, Expert-level Clinical Note Generation With On-policy Reinforcement Learning

Wang Hanyin, Gao Chufan, Liu Bolun, Xu Qiping, Hussein Guleid, El Labban Mohamad, Iheasirim Kingsley, Korsapati Hariprasad, Outcalt Chuck, Sun Jimeng. arXiv 2024

[Paper]    
Agentic, Applications, Fine Tuning, GPT, Model Architecture, Pretraining Methods, Reinforcement Learning, Training Techniques

Proprietary Large Language Models (LLMs) such as GPT-4 and Gemini have demonstrated promising capabilities in clinical text summarization tasks. However, due to patient data privacy concerns and computational costs, many healthcare providers prefer using small, locally hosted models over external generic LLMs. This study presents a comprehensive domain- and task-specific adaptation process for the open-source LLaMA-2 13 billion parameter model, enabling it to generate high-quality clinical notes from outpatient patient-doctor dialogues. The process incorporates continued pre-training, supervised fine-tuning, and reinforcement learning from both AI and human feedback. We introduce a new approach, DistillDirect, for performing on-policy reinforcement learning with Gemini 1.0 Pro as the teacher model. The resulting model, LLaMA-Clinic, generates clinical notes comparable in quality to those authored by physicians. In a blinded physician reader study, the majority (90.4%) of individual evaluations rated the notes generated by LLaMA-Clinic as “acceptable” or higher across all three criteria: real-world readiness, completeness, and accuracy. In the more challenging “Assessment and Plan” section, LLaMA-Clinic scored higher in real-world readiness (4.2/5) than physician-authored notes (4.1/5). An inference cost analysis shows that LLaMA-Clinic achieves a 3.75-fold cost reduction compared with an external generic LLM service. We also highlight key considerations for future clinical note-generation tasks, emphasizing the importance of pre-defining a best-practice note format rather than relying on LLMs to determine it for clinical practice. We have made our newly created synthetic clinic dialogue-note dataset and the physician feedback dataset publicly available to foster future research.
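The abstract describes DistillDirect only as on-policy reinforcement learning with a teacher model, so the sketch below is one plausible reading rather than the authors' released implementation: the student policy's own generation is paired with a teacher (e.g. Gemini 1.0 Pro) output and optimized with a direct-preference-style loss. The function names, the pairing rule (teacher output treated as “chosen”, on-policy sample as “rejected”), and the use of the standard DPO objective are all illustrative assumptions.

```python
# Hypothetical DistillDirect-style sketch: build on-policy preference pairs
# from a teacher model, then score them with a standard DPO loss.
import torch
import torch.nn.functional as F


def build_preference_pair(dialogue, policy_generate, teacher_generate):
    """Assumed pairing rule: the current policy's sample is the 'rejected'
    candidate and the teacher model's note is the 'chosen' candidate."""
    rejected = policy_generate(dialogue)   # on-policy sample from the student LLM
    chosen = teacher_generate(dialogue)    # teacher output, e.g. Gemini 1.0 Pro
    return {"prompt": dialogue, "chosen": chosen, "rejected": rejected}


def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO objective applied to the distillation pairs; inputs are
    summed token log-probabilities of each note under the policy and a frozen
    reference model."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -F.logsigmoid(logits).mean()
```

Under this reading, the approach avoids training a separate reward model or collecting human preference labels for every sample, while the on-policy "rejected" generations keep the optimization anchored to the student model's own output distribution.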

Similar Work