
Medchatzh: A Better Medical Adviser Learns From Better Instructions

Tan Yang, Li Mingchen, Huang Zijie, Yu Huiqun, Fan Guisheng. arXiv 2023

[Paper] [Code]
Applications, Fine Tuning, Has Code, Pretraining Methods, Reinforcement Learning, Training Techniques

Generative large language models (LLMs) have shown great success in various applications, including question-answering (QA) and dialogue systems. However, in specialized domains like traditional Chinese medical QA, these models may perform unsatisfactorily without fine-tuning on domain-specific datasets. To address this, we introduce MedChatZH, a dialogue model designed specifically for traditional Chinese medical QA. Our model is pre-trained on traditional Chinese medical books and fine-tuned with a carefully curated medical instruction dataset. It outperforms several solid baselines on a real-world medical dialogue dataset. We release our model, code, and dataset on https://github.com/tyang816/MedChatZH to facilitate further research in the domain of traditional Chinese medicine and LLMs.
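The abstract describes a two-stage recipe: continued pre-training on traditional Chinese medical books, followed by supervised fine-tuning on a curated instruction dataset. The snippet below is a minimal sketch of what that instruction fine-tuning step could look like with the Hugging Face `transformers` Trainer; the base checkpoint name, data file path, prompt template, and hyperparameters are illustrative assumptions, not taken from the MedChatZH repository, so consult the linked code for the authors' actual setup.

```python
# Minimal instruction fine-tuning sketch (assumptions: a Chinese causal-LM base
# checkpoint and a JSON file of {"instruction": ..., "output": ...} records).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "baichuan-inc/Baichuan-7B"  # hypothetical base model choice
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_model, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure padding is defined

# Hypothetical instruction file; each record pairs a question with an answer.
data = load_dataset("json", data_files="med_instructions.json")["train"]

def to_features(example):
    # Simple Q/A prompt template; the real project may use a different format.
    text = f"问：{example['instruction']}\n答：{example['output']}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = data.map(to_features, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medchat-sft",
                           num_train_epochs=3,
                           per_device_train_batch_size=4,
                           learning_rate=2e-5),
    train_dataset=tokenized,
    # mlm=False makes the collator build causal-LM labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```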

Similar Work