VL-InterpreT: An Interactive Visualization Tool for Interpreting Vision-Language Transformers

Estelle Aflalo, Meng Du, Shao-Yen Tseng, Yongfei Liu, Chenfei Wu, Nan Duan, Vasudev Lal. arXiv 2022

[Paper]    
Applications, Attention Mechanism, Interpretability and Explainability, Model Architecture, Multimodal Models, Pretraining Methods, Tools, Training Techniques, Transformer

Breakthroughs in transformer-based models have revolutionized not only the NLP field but also vision and multimodal systems. However, although visualization and interpretability tools have become available for NLP models, the internal mechanisms of vision and multimodal transformers remain largely opaque. With the success of these transformers, it is increasingly critical to understand their inner workings, as unraveling these black boxes will lead to more capable and trustworthy models. To contribute to this quest, we propose VL-InterpreT, which provides novel interactive visualizations for interpreting the attentions and hidden representations in multimodal transformers. VL-InterpreT is a task-agnostic and integrated tool that (1) tracks a variety of statistics in attention heads throughout all layers for both vision and language components, (2) visualizes cross-modal and intra-modal attentions through easily readable heatmaps, and (3) plots the hidden representations of vision and language tokens as they pass through the transformer layers. In this paper, we demonstrate the functionalities of VL-InterpreT through the analysis of KD-VLP, an end-to-end pretrained vision-language multimodal transformer, on Visual Commonsense Reasoning (VCR) and WebQA, two visual question answering benchmarks. Furthermore, we also present a few interesting findings about multimodal transformer behaviors that were learned through our tool.
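
As a rough illustration of functionality (2), the sketch below shows how a cross-modal (text-to-image) block might be sliced out of one layer's attention weights and rendered as a heatmap. The `[image tokens | text tokens]` input layout, tensor shapes, and dummy data are assumptions made for demonstration; this is not VL-InterpreT's actual implementation.

```python
# Minimal sketch (not VL-InterpreT's code): given per-layer attention maps from a
# vision-language transformer whose input is [image tokens | text tokens], slice out
# the text -> image attention block for one head and plot it as a heatmap.
import numpy as np
import matplotlib.pyplot as plt

def cross_modal_attention(attn, num_image_tokens):
    """attn: (num_heads, seq_len, seq_len) attention weights for one layer.
    Returns the text->image block, shape (num_heads, num_text, num_image)."""
    return attn[:, num_image_tokens:, :num_image_tokens]

# Dummy data standing in for a real model's attention output.
num_heads, num_image, num_text = 12, 36, 20
seq_len = num_image + num_text
rng = np.random.default_rng(0)
attn = rng.random((num_heads, seq_len, seq_len))
attn /= attn.sum(-1, keepdims=True)   # rows sum to 1, like softmaxed attention

block = cross_modal_attention(attn, num_image)
head = 3                              # arbitrary head to inspect
plt.imshow(block[head], aspect="auto", cmap="viridis")
plt.xlabel("image tokens")
plt.ylabel("text tokens")
plt.title(f"Text-to-image attention, head {head}")
plt.colorbar(label="attention weight")
plt.show()
```

In a real setting, `attn` would come from the model's per-layer attention outputs, and the same slicing gives the image-to-text, image-to-image, and text-to-text blocks by swapping the index ranges.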

Similar Work