JMedLoRA: Medical Domain Adaptation on Japanese Large Language Models Using Instruction-Tuning

Sukeda Issey, Suzuki Masahiro, Sakaji Hiroki, Kodera Satoshi. arXiv 2023

[Paper]    
Tags: Applications, Fine Tuning, GPT, Model Architecture

In the ongoing wave of impact driven by large language models (LLMs) like ChatGPT, the adaptation of LLMs to the medical domain has emerged as a crucial research frontier. Since mainstream LLMs tend to be designed for general-purpose applications, constructing a medical LLM through domain adaptation is a significant challenge. While instruction-tuning is used to fine-tune some LLMs, its precise role in domain adaptation remains unclear. Here we show the contribution of LoRA-based instruction-tuning to performance on Japanese medical question-answering tasks. In doing so, we employ a multifaceted evaluation for multiple-choice questions, including scoring based on “Exact match” and “Gestalt distance” in addition to the conventional accuracy. Our findings suggest that LoRA-based instruction-tuning can partially incorporate domain-specific knowledge into LLMs, with larger models demonstrating more pronounced effects. Furthermore, our results underscore the potential of adapting English-centric models for Japanese applications in domain adaptation, while also highlighting the persisting limitations of Japanese-centric models. This initiative represents a pioneering effort in enabling medical institutions to fine-tune and operate models without relying on external services.
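
The abstract scores multiple-choice answers by “Exact match” and “Gestalt distance” alongside accuracy. The paper's evaluation code is not reproduced here; the following is a minimal Python sketch assuming that Gestalt distance refers to Ratcliff/Obershelp (Gestalt pattern matching) similarity, as implemented by `difflib.SequenceMatcher`. The example strings `pred` and `gold` are hypothetical.

```python
# Sketch: scoring a model's multiple-choice answer against a reference string
# with exact match and a Gestalt (Ratcliff/Obershelp) similarity ratio.
from difflib import SequenceMatcher


def exact_match(prediction: str, reference: str) -> float:
    """Return 1.0 if the stripped prediction equals the reference, else 0.0."""
    return float(prediction.strip() == reference.strip())


def gestalt_similarity(prediction: str, reference: str) -> float:
    """Return Ratcliff/Obershelp similarity in [0, 1]; higher means closer."""
    return SequenceMatcher(None, prediction.strip(), reference.strip()).ratio()


# Hypothetical example: the model's answer is a near miss of the gold answer.
pred = "高血圧症"   # model output (illustrative)
gold = "高血圧"     # reference answer (illustrative)
print(exact_match(pred, gold))         # 0.0
print(gestalt_similarity(pred, gold))  # ~0.86
```

A graded similarity like this gives partial credit for near-miss answers that strict exact matching would count as wrong, which is why the paper reports it alongside conventional accuracy.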

Similar Work