Vision-Language and Large Language Model Performance in Gastroenterology: GPT, Claude, Llama, Phi, Mistral, Gemma, and Quantized Models

Safavi-Naini Seyed Amir Ahmad, Ali Shuhaib, Shahab Omer, Shahhoseini Zahra, Savage Thomas, Rafiee Sara, Samaan Jamil S, Shabeeb Reem Al, Ladak Farah, Yang Jamie O, Echavarria Juan, Babar Sumbal, Shaukat Aasma, Margolis Samuel, Tatonetti Nicholas P, Nadkarni Girish, Kurdi Bara El, Soroush Ali. arXiv 2024

[Paper]    
Tags: Efficiency And Optimization, GPT, Model Architecture, Multimodal Models, Prompting, Quantization, RAG, Tools

Background and Aims: This study evaluates the medical reasoning performance of large language models (LLMs) and vision language models (VLMs) in gastroenterology.

Methods: We used 300 gastroenterology board exam-style multiple-choice questions, 138 of which contain images. Using GPT-3.5, we first systematically assessed the impact of model configurations, parameters, and prompt engineering strategies. Next, we assessed the performance of proprietary and open-source LLMs, including GPT (3.5, 4, 4o, 4o-mini), Claude (3, 3.5), Gemini (1.0), Mistral, Llama (2, 3, 3.1), Mixtral, and Phi (3), across different interfaces (web and API), computing environments (cloud and local), and model precisions (with and without quantization). Finally, we assessed accuracy using a semi-automated pipeline.

Results: Among the proprietary models, GPT-4o (73.7%) and Claude3.5-Sonnet (74.0%) achieved the highest accuracy, outperforming the top open-source models: Llama3.1-405b (64%), Llama3.1-70b (58.3%), and Mixtral-8x7b (54.3%). Among the quantized open-source models, the 6-bit quantized Phi3-14b (48.7%) performed best, with scores comparable to those of the full-precision models Llama2-7b, Llama2-13b, and Gemma2-9b. Notably, VLM performance on image-containing questions did not improve when the images were provided, and it worsened when LLM-generated captions were provided. In contrast, accuracy increased by 10% when images were accompanied by human-crafted image descriptions.

Conclusion: While LLMs exhibit robust zero-shot performance in medical reasoning, integrating visual data remains a challenge for VLMs. Effective deployment requires carefully selecting model configurations; users should weigh the high performance of proprietary models against the flexible adaptability of open-source models.
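The abstract mentions a semi-automated pipeline for scoring model answers but does not describe its implementation. As a rough illustration only, the Python sketch below shows one way such a pipeline could work for free-text responses to multiple-choice questions: a small set of regex patterns extracts the chosen option letter, parseable responses are scored automatically, and anything ambiguous is flagged for manual review. All names, patterns, and demo data here are hypothetical and not taken from the paper.

```python
import re
from dataclasses import dataclass

# Hypothetical record for one board-style question and a model's raw response.
@dataclass
class GradedItem:
    question_id: str
    response: str            # raw model output
    answer_key: str          # correct option letter, e.g. "C"
    extracted: str | None = None
    correct: bool | None = None
    needs_review: bool = False

# Patterns that commonly signal a final answer in free-text responses.
ANSWER_PATTERNS = [
    r"(?:final answer|answer)\s*(?:is|:)?\s*\(?([A-E])\)?\b",
    r"^\s*\(?([A-E])[).:]",   # response that starts with "C." or "(C)"
]

def extract_choice(response: str) -> str | None:
    """Try to pull a single option letter (A-E) out of a free-text response."""
    for pattern in ANSWER_PATTERNS:
        match = re.search(pattern, response, flags=re.IGNORECASE | re.MULTILINE)
        if match:
            return match.group(1).upper()
    return None

def grade(items: list[GradedItem]) -> float:
    """Score parseable items automatically; flag the rest for manual review."""
    scored = 0
    for item in items:
        item.extracted = extract_choice(item.response)
        if item.extracted is None:
            item.needs_review = True   # human adjudication, the "semi" part
            continue
        item.correct = item.extracted == item.answer_key.upper()
        scored += 1
    correct = sum(1 for i in items if i.correct)
    return correct / scored if scored else 0.0

if __name__ == "__main__":
    demo = [
        GradedItem("q1", "The best next step is colonoscopy. Final answer: B", "B"),
        GradedItem("q2", "(D) Endoscopic ultrasound is most appropriate.", "C"),
        GradedItem("q3", "I would favor the second option.", "B"),  # gets flagged
    ]
    acc = grade(demo)
    print(f"auto-scored accuracy: {acc:.1%}, "
          f"flagged for review: {sum(i.needs_review for i in demo)}")
```

In a setup like this, accuracy is computed over the automatically scored items, while flagged responses are resolved by a human reviewer before the final tally; the actual pipeline used in the study may differ.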

Similar Work