Medaide: Leveraging Large Language Models For On-premise Medical Assistance On Edge Devices

Basit Abdul, Hussain Khizar, Hanif Muhammad Abdullah, Shafique Muhammad. arXiv 2024

[Paper]    
Agentic, Efficiency And Optimization, Fine Tuning, RAG, Reinforcement Learning, Tools, Training Techniques

Large language models (LLMs) are revolutionizing various domains with their remarkable natural language processing (NLP) abilities. However, deploying LLMs in resource-constrained edge computing and embedded systems presents significant challenges. A further challenge is delivering medical assistance in remote areas with limited healthcare facilities and infrastructure. To address this, we introduce MedAide, an on-premise healthcare chatbot. It leverages tiny-LLMs integrated with LangChain, providing efficient edge-based preliminary medical diagnostics and support. MedAide employs model optimizations for minimal memory footprint and latency on embedded edge devices, requiring no server infrastructure. The training process is optimized using low-rank adaptation (LoRA). Additionally, the model is trained on diverse medical datasets, employing reinforcement learning from human feedback (RLHF) to enhance its domain-specific capabilities. The system is implemented on various consumer GPUs and an Nvidia Jetson development board. MedAide achieves 77% accuracy in medical consultations and scores 56 on the USMLE benchmark, enabling an energy-efficient healthcare assistance platform that alleviates privacy concerns through edge-based deployment, thereby empowering the community.
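The abstract credits low-rank adaptation (LoRA) for keeping training cheap. The core idea can be sketched in a few lines: a frozen weight matrix W is augmented with a trainable low-rank product B·A scaled by alpha/r, so only r·(d_in + d_out) parameters are updated instead of d_in·d_out. The sketch below is illustrative only; the matrix shapes, helper names, and toy values are assumptions, not details from the paper.

```python
# Minimal sketch of the LoRA forward pass: y = W x + (alpha / r) * B (A x).
# W is frozen; only A (r x d_in) and B (d_out x r) would be trained.
# With B initialized to zeros (the usual LoRA init), the adapted layer
# starts out identical to the base layer.

def matvec(matrix, vec):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

def lora_forward(x, W, A, B, alpha, r):
    """Apply a LoRA-adapted linear layer to input vector x."""
    base = matvec(W, x)                 # frozen path: W x
    delta = matvec(B, matvec(A, x))     # low-rank path: B (A x)
    scale = alpha / r                   # standard LoRA scaling
    return [b + scale * d for b, d in zip(base, delta)]

# Toy example: identity W, rank r = 1, alpha = 2.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]        # 1 x d_in
B_zero = [[0.0], [0.0]] # d_out x 1, zero-initialized
B_tuned = [[1.0], [1.0]]

print(lora_forward([2.0, 3.0], W, A, B_zero, 2.0, 1))   # [2.0, 3.0]
print(lora_forward([2.0, 3.0], W, A, B_tuned, 2.0, 1))  # [12.0, 13.0]
```

For a square d×d layer, full fine-tuning updates d² parameters while this scheme updates only 2rd, which is what makes adapter training feasible on memory-constrained edge hardware.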

Similar Work