
EyeGPT: Ophthalmic Assistant with Large Language Models

Chen Xiaolan, Zhao Ziwei, Zhang Weiyi, Xu Pusheng, Gao Le, Xu Mingpu, Wu Yue, Li Yinwen, Shi Danli, He Mingguang. arXiv 2024

[Paper]    
Attention Mechanism, Efficiency and Optimization, GPT, Model Architecture, RAG, Reinforcement Learning, Tools

Artificial intelligence (AI) has gained significant attention in healthcare consultation due to its potential to improve clinical workflow and enhance medical communication. However, owing to the complex nature of medical information, large language models (LLMs) trained with general world knowledge might not possess the capability to tackle medical-related tasks at an expert level. Here, we introduce EyeGPT, a specialized LLM designed specifically for ophthalmology, using three optimization strategies: role-playing, fine-tuning, and retrieval-augmented generation. In particular, we propose a comprehensive evaluation framework that encompasses a diverse dataset covering various subspecialties of ophthalmology, different users, and diverse inquiry intents. Moreover, we consider multiple evaluation metrics, including accuracy, understandability, trustworthiness, empathy, and the proportion of hallucinations. By assessing the performance of different EyeGPT variants, we identify the most effective one, which exhibits comparable levels of understandability, trustworthiness, and empathy to human ophthalmologists (all Ps > 0.05). Overall, our study provides valuable insights for future research, facilitating comprehensive comparisons and evaluations of different strategies for developing specialized LLMs in ophthalmology. The potential benefits include enhancing the patient experience in eye care and optimizing ophthalmologists' services.
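
The abstract names role-playing and retrieval-augmented generation as two of EyeGPT's three optimization strategies. The sketch below is a minimal, self-contained illustration of how those two strategies might be combined at prompt-construction time; it is not the paper's implementation, and all names (`KNOWLEDGE_BASE`, `retrieve`, `build_prompt`) and the toy keyword-overlap retriever are hypothetical stand-ins for a real document store and embedding-based retrieval.

```python
# Minimal sketch (not the paper's code) of role-playing plus retrieval-augmented
# generation for an ophthalmic assistant. All names and data are hypothetical.

from typing import List

# Hypothetical ophthalmology snippets standing in for a real document store.
KNOWLEDGE_BASE: List[str] = [
    "Glaucoma is a group of eye conditions that damage the optic nerve, often associated with elevated intraocular pressure.",
    "Cataract is a clouding of the lens that decreases vision and is treated surgically.",
    "Diabetic retinopathy is a diabetes complication affecting the blood vessels of the retina.",
]

# Role-playing: a system prompt that casts the LLM as an ophthalmologist.
ROLE_PROMPT = (
    "You are an experienced ophthalmologist. Answer patient questions "
    "accurately, empathetically, and in plain language."
)

def retrieve(query: str, corpus: List[str], k: int = 1) -> List[str]:
    """Rank passages by naive keyword overlap (a stand-in for a real retriever)."""
    q_tokens = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: len(q_tokens & set(p.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Combine the role-playing system prompt, retrieved context, and the user question."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return f"{ROLE_PROMPT}\n\nRelevant context:\n{context}\n\nPatient question: {question}\nAnswer:"

if __name__ == "__main__":
    # The assembled prompt would then be sent to a base or fine-tuned LLM.
    print(build_prompt("What causes glaucoma and how serious is it?"))
```

In a full system, the keyword-overlap retriever would be replaced by vector search over an ophthalmology corpus, and the assembled prompt would be passed to a fine-tuned model, corresponding to the third strategy the abstract mentions.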

Similar Work