
Targeted Visual Prompting For Medical Visual Question Answering

Tascon-Morales Sergio, Márquez-Neila Pablo, Sznitman Raphael. arXiv 2024

[Paper] [Code]    
Tags: Applications, Has Code, Merging, Model Architecture, Multimodal Models, Prompting, Tools

With growing interest in recent years, medical visual question answering (Med-VQA) has rapidly evolved, with multimodal large language models (MLLMs) emerging as an alternative to classical model architectures. Specifically, their ability to add visual information to the input of pre-trained LLMs brings new capabilities for image interpretation. However, simple visual errors cast doubt on the actual visual understanding abilities of these models. To address this, region-based questions have been proposed as a means to assess and enhance actual visual understanding through compositional evaluation. To combine these two perspectives, this paper introduces targeted visual prompting to equip MLLMs with region-based questioning capabilities. By presenting the model with both the isolated region and the region in its context in a customized visual prompt, we show the effectiveness of our method across multiple datasets while comparing it to several baseline models. Our code and data are available at https://github.com/sergiotasconmorales/locvqallm.
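The abstract describes presenting the model with both the isolated region and the region in its context. Below is a minimal sketch of how such a targeted visual prompt could be assembled; the bounding-box format, prompt wording, and the `mllm.generate` interface are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of a targeted visual prompt: an isolated crop of the region
# plus the full image with the region outlined, paired with the question.
# The MLLM wrapper (`mllm.generate`) is hypothetical.
from PIL import Image, ImageDraw

def build_targeted_visual_prompt(image_path, region, question):
    """`region` is an (x0, y0, x1, y1) bounding box in pixel coordinates."""
    full = Image.open(image_path).convert("RGB")

    # 1) Isolated view: crop the region so the model sees it without distractors.
    isolated = full.crop(region)

    # 2) Contextual view: outline the region on a copy of the full image so the
    #    model can relate it to its surroundings.
    context = full.copy()
    ImageDraw.Draw(context).rectangle(region, outline=(255, 0, 0), width=3)

    # 3) Text prompt referring to both views.
    text = (
        "The first image shows a region of interest in isolation; the second "
        "shows the same region outlined in red within the full image. "
        f"Question: {question}"
    )
    return [isolated, context], text

# Hypothetical usage with some multimodal LLM wrapper:
# images, prompt = build_targeted_visual_prompt(
#     "scan.png", (40, 60, 180, 200), "Is there an abnormality in this region?")
# answer = mllm.generate(images=images, prompt=prompt)
```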

Similar Work