From Redundancy To Relevance: Enhancing Explainability In Multimodal Large Language Models

Zhang Xiaofeng, Shen Chen, Yuan Xiaosong, Yan Shaotian, Xie Liang, Wang Wenxiao, Gu Chaochen, Tang Hao, Ye Jieping. arXiv 2024

[Paper]
Interpretability And Explainability · Multimodal Models · Prompting

Multimodal large language models have recently proliferated, and most popular Large Vision Language Models (LVLMs) rely on sequential visual representations, where an image is converted into hundreds or thousands of tokens before being fed into the Large Language Model (LLM) together with the language prompt. This black-box design hinders the interpretability of vision-language models, especially on more complex reasoning tasks. To examine how image and text interact during complex reasoning, we introduce an information flow method to visualize the interaction mechanism. By analyzing the dynamics of this information flow, we find that it converges in the shallow layers, and further investigation reveals substantial redundancy among image tokens at those layers. We therefore introduce a truncation strategy that aggregates image tokens within the shallow layers. Experiments across multiple models validate this approach, yielding consistent improvements.
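
To make the described truncation idea concrete, below is a minimal PyTorch-style sketch of aggregating redundant image tokens after a shallow layer. This is not the authors' implementation: the function name, the `keep_ratio` parameter, the attention-based ranking, and the mean-pooling of discarded tokens are illustrative assumptions about how such a strategy could look.

```python
import torch

def truncate_image_tokens(hidden_states, image_start, image_end,
                          keep_ratio=0.25, attn_to_image=None):
    """Hypothetical shallow-layer truncation: keep the most salient image
    tokens and aggregate the rest into a single mean token.

    hidden_states: (batch, seq_len, dim) activations after a shallow layer
    image_start, image_end: span of image tokens within the sequence
    attn_to_image: optional (batch, n_image) saliency scores per image token
    """
    img = hidden_states[:, image_start:image_end]            # (B, N_img, D)
    n_keep = max(1, int(img.size(1) * keep_ratio))
    if attn_to_image is None:
        # fallback saliency: rank image tokens by hidden-state norm
        attn_to_image = img.norm(dim=-1)                     # (B, N_img)
    top_idx = attn_to_image.topk(n_keep, dim=-1).indices     # (B, n_keep)
    kept = torch.gather(
        img, 1, top_idx.unsqueeze(-1).expand(-1, -1, img.size(-1)))
    # aggregate the remaining (redundant) image tokens into one mean token
    mean_tok = img.mean(dim=1, keepdim=True)                 # (B, 1, D)
    new_img = torch.cat([kept, mean_tok], dim=1)
    return torch.cat([hidden_states[:, :image_start], new_img,
                      hidden_states[:, image_end:]], dim=1)

# toy usage: 4 text tokens, 16 image tokens, 4 text tokens
h = torch.randn(2, 24, 64)
out = truncate_image_tokens(h, image_start=4, image_end=20, keep_ratio=0.25)
print(out.shape)  # torch.Size([2, 13, 64]): 4 + (4 kept + 1 mean) + 4
```

The sketch shrinks the image-token span only after the shallow layers where, per the paper's analysis, the information flow has already converged; later layers then operate on the shorter sequence.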

Similar Work