Wanglab At MEDIQA-M3G 2024: Multimodal Medical Answer Generation Using Large Language Models

Xie Ronald, Palayew Steven, Toma Augustin, Bader Gary, Wang Bo. arXiv 2024


This paper outlines our submission to the MEDIQA 2024 Multilingual and Multimodal Medical Answer Generation (M3G) shared task. We report results for two standalone solutions under the English category of the task: the first involves two consecutive calls to the Claude 3 Opus API, and the second involves training an image-disease label joint embedding in the style of CLIP for image classification. These two solutions scored 1st and 2nd place respectively on the competition leaderboard, substantially outperforming the next best solution. Additionally, we discuss insights gained from post-competition experiments. While the performance of both solutions has significant room for improvement, owing to the difficulty of the shared task and the challenging nature of medical visual question answering in general, we identify the multi-stage LLM approach and the CLIP image classification approach as promising avenues for further investigation.
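The CLIP-style approach pairs image embeddings with disease-label embeddings and trains them jointly so that matching pairs score highest under cosine similarity; classification then reduces to nearest-label retrieval. The sketch below illustrates the symmetric contrastive objective and the retrieval step in NumPy; the function names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clip_contrastive_loss(image_emb, label_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image/label pairs."""
    # L2-normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    label_emb = label_emb / np.linalg.norm(label_emb, axis=1, keepdims=True)
    # Similarity logits; matched pairs lie on the diagonal
    logits = image_emb @ label_emb.T / temperature
    n = logits.shape[0]

    def xent(l):
        # Cross-entropy of the softmax against the diagonal targets
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the image-to-label and label-to-image directions
    return (xent(logits) + xent(logits.T)) / 2

def classify(image_emb, label_emb):
    """Predict, for each image, the label whose embedding is most similar."""
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    label_emb = label_emb / np.linalg.norm(label_emb, axis=1, keepdims=True)
    return (image_emb @ label_emb.T).argmax(axis=1)
```

In a full training loop this loss would be backpropagated through the image and text encoders; here the embeddings are treated as given to keep the objective itself in focus.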
