
DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation

Zhijie Bao, Wei Chen, Shengze Xiao, Kuang Ren, Jiaao Wu, Cheng Zhong, Jiajie Peng, Xuanjing Huang, Zhongyu Wei. arXiv 2023

[Paper] [Code]    
Fine Tuning, Has Code, Pretraining Methods, RAG, Reinforcement Learning, Training Techniques

We propose DISC-MedLLM, a comprehensive solution that leverages Large Language Models (LLMs) to provide accurate and truthful medical responses in end-to-end conversational healthcare services. To construct high-quality Supervised Fine-Tuning (SFT) datasets, we employ three strategies: utilizing medical knowledge graphs, reconstructing real-world dialogues, and incorporating human-guided preference rephrasing. These datasets are used to train DISC-MedLLM, which surpasses existing medical LLMs in both single-turn and multi-turn consultation scenarios. Extensive experimental results demonstrate the effectiveness of the proposed model in bridging the gap between general language models and real-world medical consultation. Additionally, we release the constructed dataset and model weights to further contribute to research and development. Further details and resources can be found at https://github.com/FudanDISC/DISC-MedLLM
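To make the first dataset-construction strategy more concrete, below is a minimal sketch of how medical knowledge-graph triples could be templated into single-turn SFT samples. The triples, relation names, and question templates are hypothetical illustrations; the paper's actual pipeline (and its dialogue-style rewriting) is more involved than this template-only rendering.

```python
# Minimal sketch (not the paper's actual pipeline): turning hypothetical
# medical knowledge-graph triples into single-turn instruction/response
# samples for supervised fine-tuning.
from typing import Dict, List, Tuple

# Hypothetical (head, relation, tail) triples from a medical knowledge graph.
TRIPLES: List[Tuple[str, str, str]] = [
    ("influenza", "common_symptom", "fever"),
    ("influenza", "recommended_drug", "oseltamivir"),
]

# Hypothetical question templates keyed by relation type.
TEMPLATES: Dict[str, str] = {
    "common_symptom": "What symptoms are commonly associated with {head}?",
    "recommended_drug": "Which drug is typically used to treat {head}?",
}

def triples_to_sft_samples(
    triples: List[Tuple[str, str, str]],
) -> List[Dict[str, str]]:
    """Render each triple as an instruction/response pair for SFT."""
    samples = []
    for head, relation, tail in triples:
        question = TEMPLATES[relation].format(head=head)
        # A real pipeline would rewrite this pair into natural consultation
        # dialogue; the raw templated answer here is for illustration only.
        answer = f"For {head}, the knowledge graph lists: {tail}."
        samples.append({"instruction": question, "response": answer})
    return samples

if __name__ == "__main__":
    for sample in triples_to_sft_samples(TRIPLES):
        print(sample)
```

Each emitted dictionary maps directly onto the instruction/response format commonly used for SFT, so a dataset built this way can be fed to standard fine-tuning tooling without further restructuring.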

Similar Work