
Do Vision-language Transformers Exhibit Visual Commonsense? An Empirical Study Of VCR

Li Zhenyang, Guo Yangyang, Wang Kejie, Chen Xiaolin, Nie Liqiang, Kankanhalli Mohan. arXiv 2024

[Paper]    
Tags: Applications, Ethics And Bias, Model Architecture, Multimodal Models, Pretraining Methods, Training Techniques, Transformer

Visual Commonsense Reasoning (VCR) calls for explanatory reasoning behind question answering over visual scenes. To achieve this goal, a model is required to provide an acceptable rationale as the reason for its predicted answer. Progress on the benchmark dataset stems largely from the recent advancement of Vision-Language Transformers (VL Transformers). These models are first pre-trained on generic large-scale vision-text datasets, and the learned representations are then transferred to the downstream VCR task. Despite their attractive performance, this paper posits that VL Transformers do not exhibit visual commonsense, which is the key to VCR. In particular, our empirical results pinpoint several shortcomings of existing VL Transformers: small gains from pre-training, unexpected language bias, limited model architectures for the two inseparable sub-tasks, and neglect of the important object-tag correlation. With these findings, we tentatively suggest future directions from the aspects of datasets, evaluation metrics, and training tricks. We believe this work could prompt researchers to revisit the intuition and goals of VCR, and thus help tackle the remaining challenges in visual reasoning.
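The "two inseparable sub-tasks" mentioned above are the standard VCR protocol: answer selection (Q→A), rationale selection given the correct answer (QA→R), and the joint metric (Q→AR) that counts an example only when both are right. Below is a minimal, illustrative Python sketch of that scoring scheme; the class and function names are hypothetical and do not come from the paper or an official VCR codebase.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class VCRPrediction:
    """Hypothetical container for one VCR example's predictions and labels."""
    answer_pred: int       # index of the predicted answer (out of 4 choices)
    answer_label: int      # index of the correct answer
    rationale_pred: int    # index of the predicted rationale (out of 4 choices)
    rationale_label: int   # index of the correct rationale


def vcr_accuracies(preds: List[VCRPrediction]) -> Dict[str, float]:
    """Compute the three standard VCR accuracies.

    Assumes rationale predictions were made conditioned on the ground-truth
    answer (the usual QA->R setting).
    """
    n = len(preds)
    qa = sum(p.answer_pred == p.answer_label for p in preds)
    qar = sum(p.rationale_pred == p.rationale_label for p in preds)
    joint = sum(
        p.answer_pred == p.answer_label and p.rationale_pred == p.rationale_label
        for p in preds
    )
    return {"Q->A": qa / n, "QA->R": qar / n, "Q->AR": joint / n}


if __name__ == "__main__":
    # Toy example: one fully correct prediction, one with a wrong rationale.
    demo = [
        VCRPrediction(answer_pred=2, answer_label=2, rationale_pred=1, rationale_label=1),
        VCRPrediction(answer_pred=0, answer_label=0, rationale_pred=3, rationale_label=2),
    ]
    print(vcr_accuracies(demo))  # {'Q->A': 1.0, 'QA->R': 0.5, 'Q->AR': 0.5}
```

The joint Q→AR score is why the paper treats answer and rationale prediction as inseparable: a model that answers correctly for the wrong reason gains nothing on the joint metric.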

Similar Work