
Multi-modal Instruction Tuned LLMs With Fine-grained Visual Perception

He Junwen, Wang Yifan, Wang Lijun, Lu Huchuan, He Jun-yan, Lan Jin-peng, Luo Bin, Xie Xuansong. arXiv 2024

[Paper]    
Attention Mechanism Model Architecture Multimodal Models Prompting RAG Tools Training Techniques

Multimodal Large Language Models (MLLMs) leverage Large Language Models as a cognitive framework for diverse visual-language tasks. Recent efforts have equipped MLLMs with visual perception and grounding capabilities. However, a gap remains in providing fine-grained pixel-level perception and in extending interactions beyond text-only inputs. In this work, we propose **AnyRef**, a general MLLM that can generate pixel-wise object perceptions and natural language descriptions from multi-modality references such as texts, boxes, images, or audio. This empowers users to engage with the model beyond textual and regional prompts, without modality-specific designs. Through our proposed refocusing mechanism, the generated grounding output is guided to focus more closely on the referenced object, implicitly incorporating additional pixel-level supervision. This simple modification reuses attention scores produced during LLM inference, eliminating the need for extra computation while improving both grounding masks and referring expressions. With only publicly available training data, our model achieves state-of-the-art results across multiple benchmarks, including diverse-modality referring segmentation and region-level referring expression generation.
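
The core idea of reusing inference-time attention as a free focus signal can be pictured with a short sketch. The snippet below is a minimal illustration, not the AnyRef implementation: the function name `refocus_mask_logits`, the tensor shapes, and the specific normalization are all assumptions made for clarity.

```python
import torch

def refocus_mask_logits(attn_scores: torch.Tensor,
                        mask_logits: torch.Tensor,
                        ref_token_ids: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch: bias grounding mask logits toward regions that the
    referring tokens already attend to during LLM inference.

    attn_scores:   (num_tokens, H, W) attention maps over visual features,
                   a by-product of the LLM forward pass (no extra compute).
    mask_logits:   (H, W) raw mask prediction for the referenced object.
    ref_token_ids: indices of the tokens that express the reference.
    """
    # Average the attention maps of the referring tokens into one focus map.
    focus = attn_scores[ref_token_ids].mean(dim=0)                  # (H, W)
    # Normalize to [0, 1] so it can act as a soft spatial prior.
    focus = (focus - focus.min()) / (focus.max() - focus.min() + 1e-6)
    # Add the (log-domain) prior to the mask logits: high-attention regions
    # are pushed up, low-attention regions are suppressed.
    return mask_logits + torch.log(focus + 1e-6)
```

Because the attention maps are already computed when the LLM decodes its answer, this kind of re-weighting adds essentially no cost at inference time, which matches the paper's claim that the refocusing mechanism requires no extra computation.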

Similar Work