The Solution For The 5th GCAIAC Zero-shot Referring Expression Comprehension Challenge

Huang Longfei, Yu Feng, Guan Zhihao, Wan Zhonghua, Yang Yang. arXiv 2024

[Paper]

Tags: Applications, Attention Mechanism, Model Architecture, Multimodal Models, Prompting, Training Techniques

This report presents a solution to the zero-shot referring expression comprehension task. Vision-language multimodal foundation models (such as CLIP and SAM) have attracted significant attention in recent years and have become a cornerstone of mainstream research. One of their key applications is generalization to zero-shot downstream tasks. Unlike traditional referring expression comprehension, the zero-shot variant applies pre-trained vision-language models directly to the task, without task-specific training. Recent studies have improved the zero-shot performance of such models on referring expression comprehension by introducing visual prompts. To address this challenge, we combined multiple visual prompts, accounted for the influence of textual prompts, and employed joint prediction tailored to the characteristics of the data. Our approach ultimately achieved accuracy scores of 84.825 on the A leaderboard and 71.460 on the B leaderboard, securing first place.
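The entry above is abstract-only, but the general recipe it describes (scoring region proposals with a pre-trained CLIP, marking each candidate with a visual prompt, wrapping the expression in a textual prompt, and fusing the scores as a joint prediction) can be illustrated with a minimal sketch. Everything concrete below is an assumption for illustration, not the authors' pipeline: the checkpoint name, the red-ellipse marker, the prompt template, the SAM-derived boxes, and the mean-based fusion are all hypothetical stand-ins.

```python
# Minimal sketch of visual-prompt zero-shot REC scoring with CLIP.
# Assumptions (not from the paper): candidate boxes come from an external
# proposal source (e.g., SAM masks converted to boxes); the visual prompt
# is a red ellipse drawn around each candidate; marked-image and plain-crop
# scores are averaged as a stand-in for the paper's "joint prediction".

import torch
from PIL import Image, ImageDraw
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def mark_box(image, box, width=4):
    """Draw a red ellipse around `box` (x0, y0, x1, y1) as a visual prompt."""
    marked = image.copy()
    ImageDraw.Draw(marked).ellipse(box, outline="red", width=width)
    return marked

@torch.no_grad()
def score_candidates(image, boxes, expression):
    """Return one CLIP similarity score per candidate box."""
    # Textual prompt: a simple template wrapped around the expression
    # (hypothetical; the report only says textual prompts were considered).
    text = f"a photo of {expression}"
    # Two views per box: the full image with the candidate marked, and a crop.
    views = [mark_box(image, b) for b in boxes] + [image.crop(b) for b in boxes]
    inputs = processor(text=[text], images=views, return_tensors="pt", padding=True)
    sims = model(**inputs).logits_per_image.squeeze(-1)  # (2 * num_boxes,)
    marked_scores, crop_scores = sims[: len(boxes)], sims[len(boxes):]
    return (marked_scores + crop_scores) / 2  # assumed fusion: simple mean

# Usage: pick the box whose views best match the referring expression.
image = Image.open("example.jpg").convert("RGB")
boxes = [(30, 40, 200, 220), (210, 60, 380, 300)]  # e.g., from SAM proposals
scores = score_candidates(image, boxes, "the man in the red shirt")
print("predicted box:", boxes[int(scores.argmax())])
```

Marking the candidate inside the full image preserves scene context that a bare crop discards, which is why visual-prompt methods typically score a marked view alongside (or instead of) crops alone.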
