Medadapter: Efficient Test-time Adaptation Of Large Language Models Towards Medical Reasoning

Shi Wenqi, Xu Ran, Zhuang Yuchen, Yu Yue, Wu Hang, Yang Carl, Wang May D. arXiv 2024

[Paper]    
Applications BERT Fine Tuning Model Architecture Pretraining Methods RAG Training Techniques

Despite their improved capabilities in generation and reasoning, adapting large language models (LLMs) to the biomedical domain remains challenging due to their immense size and corporate privacy concerns. In this work, we propose MedAdapter, a unified post-hoc adapter for test-time adaptation of LLMs towards biomedical applications. Instead of fine-tuning the entire LLM, MedAdapter effectively adapts the original model by fine-tuning only a small BERT-sized adapter to rank candidate solutions generated by LLMs. Experiments demonstrate that MedAdapter effectively adapts both white-box and black-box LLMs in biomedical reasoning, achieving average performance improvements of 25.48% and 11.31%, respectively, without requiring extensive computational resources or sharing data with third parties. MedAdapter also yields superior performance when combined with train-time adaptation, highlighting a flexible and complementary solution to existing adaptation methods. Faced with the challenge of balancing model performance, computational resources, and data privacy, MedAdapter provides an efficient, privacy-preserving, cost-effective, and transparent solution for adapting LLMs to the biomedical domain.
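
The abstract describes test-time adaptation as candidate re-ranking: the frozen LLM samples several candidate solutions, and a small BERT-sized adapter scores each one so the top-ranked candidate is returned. The sketch below illustrates that ranking step under stated assumptions; the backbone name (`bert-base-uncased`), the single-logit scoring head, and the example question and candidates are illustrative placeholders, not the authors' released implementation.

```python
# Hypothetical sketch of MedAdapter-style test-time re-ranking: a small
# BERT-sized scorer ranks candidate solutions sampled from a frozen LLM.
# The backbone, scoring head, and example inputs below are assumptions
# for illustration, not the paper's released code.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ADAPTER_NAME = "bert-base-uncased"  # assumed BERT-sized adapter backbone
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_NAME)
# Single-logit head used as a scalar scorer over (question, candidate) pairs.
adapter = AutoModelForSequenceClassification.from_pretrained(ADAPTER_NAME, num_labels=1)
adapter.eval()

def rank_candidates(question: str, candidates: list[str]) -> str:
    """Score each (question, candidate) pair and return the top-ranked candidate."""
    inputs = tokenizer(
        [question] * len(candidates),
        candidates,
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        scores = adapter(**inputs).logits.squeeze(-1)  # one scalar per candidate
    return candidates[int(torch.argmax(scores))]

# In practice, candidates would be sampled from the (white- or black-box) LLM.
question = "Which enzyme is deficient in phenylketonuria?"
candidates = [
    "Phenylalanine hydroxylase is deficient, so phenylalanine accumulates.",
    "Tyrosinase deficiency causes the disorder.",
]
print(rank_candidates(question, candidates))
```

Because only this small scorer is trained, adaptation avoids fine-tuning or exposing the underlying LLM, which is consistent with the privacy and cost claims in the abstract.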

Similar Work