
EventLens: Leveraging Event-Aware Pretraining and Cross-modal Linking Enhances Visual Commonsense Reasoning

Ma Mingjie, Yu Zhihuan, Ma Yichao, Li Guohui. arXiv 2024

Tags: Fine Tuning, Multimodal Models, Pretraining Methods, Prompting, RAG, Training Techniques

Visual Commonsense Reasoning (VCR) is a cognitive task that challenges models to answer visual questions requiring human commonsense and to provide rationales explaining why the answers are correct. With the emergence of Large Language Models (LLMs), it is natural and imperative to explore their applicability to VCR. However, the VCR task demands more external knowledge to tackle its challenging questions, necessitating special designs to activate LLMs' commonsense reasoning abilities. Moreover, most existing Multimodal LLMs adopt an abstraction of the entire input image, which makes it difficult to comprehend VCR's unique co-reference tags between image regions and text, posing challenges for fine-grained alignment. To address these issues, we propose EventLens, which leverages Event-Aware Pretraining and Cross-modal Linking and EnhanceS VCR. First, by emulating the cognitive process of human reasoning, an Event-Aware Pretraining auxiliary task is introduced to better activate the LLM's global comprehension of intricate scenarios. Second, during fine-tuning, we further utilize reference tags to bridge RoI features with text while preserving both modality semantics. Finally, we use instruct-style prompts to narrow the gap between pretraining and fine-tuning, and task-specific adapters to better integrate the LLM's inherent knowledge with new commonsense. Experimental results show the effectiveness of our proposed auxiliary task and fine-grained linking strategy.
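To make the cross-modal linking idea concrete, below is a minimal, hypothetical PyTorch sketch of how co-reference tags in the question text could be bridged to RoI features: each tag position in the token sequence receives the projected visual feature of its linked region, added on top of the original text embedding so that both modality semantics are preserved. The class name `RoITagLinker`, the dimensions, and the fusion-by-addition choice are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class RoITagLinker(nn.Module):
    """Illustrative sketch: fuse region-of-interest (RoI) features into the
    text embeddings at the positions of co-reference tags (e.g. "[person1]"),
    keeping both the tag's text embedding and the region's visual feature."""

    def __init__(self, roi_dim: int = 2048, hidden_dim: int = 768):
        super().__init__()
        # project RoI features into the LLM embedding space (assumed dims)
        self.roi_proj = nn.Linear(roi_dim, hidden_dim)

    def forward(self, token_embeds, roi_feats, tag_positions, tag_to_roi):
        # token_embeds:  (seq_len, hidden_dim) text embeddings from the LLM
        # roi_feats:     (num_rois, roi_dim) visual features of detected regions
        # tag_positions: token indices where co-reference tags occur
        # tag_to_roi:    RoI index linked to each tag occurrence
        fused = token_embeds.clone()
        projected = self.roi_proj(roi_feats)
        for pos, roi_idx in zip(tag_positions, tag_to_roi):
            # add the projected visual feature to the tag's text embedding,
            # preserving both modality semantics
            fused[pos] = fused[pos] + projected[roi_idx]
        return fused


if __name__ == "__main__":
    linker = RoITagLinker()
    text = torch.randn(16, 768)   # 16 text tokens, two of which are tags
    rois = torch.randn(3, 2048)   # 3 detected regions
    out = linker(text, rois, tag_positions=[2, 7], tag_to_roi=[0, 2])
    print(out.shape)              # torch.Size([16, 768])
```

Additive fusion is only one plausible choice here; the paper's actual linking strategy may combine the modalities differently.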

Similar Work