
Towards Vision Enhancing LLMs: Empowering Multimodal Knowledge Storage And Sharing In LLMs

Li Yunxin, Hu Baotian, Wang Wei, Cao Xiaochun, Zhang Min. arXiv 2023

[Paper]    
Applications, GPT, Language Modeling, Model Architecture, Multimodal Models, RAG, Reinforcement Learning

Recent advancements in multimodal large language models (MLLMs) have achieved significant multimodal generation capabilities, akin to GPT-4. These models predominantly map visual information into the language representation space, leveraging the vast knowledge and powerful text generation abilities of LLMs to produce multimodal instruction-following responses. We term this method LLMs for Vision because it employs LLMs for visual-language understanding, yet we observe that these MLLMs neglect the potential of harnessing visual knowledge to enhance the overall capabilities of LLMs, which could be regarded as Vision Enhancing LLMs. In this paper, we propose an approach called MKS2, aimed at enhancing LLMs by empowering Multimodal Knowledge Storage and Sharing in LLMs. Specifically, we introduce the Modular Visual Memory, a component integrated into the internal blocks of LLMs and designed to store open-world visual information efficiently. Additionally, we present a soft Mixtures-of-Multimodal-Experts architecture in LLMs to invoke multimodal knowledge collaboration during generation. Our comprehensive experiments demonstrate that MKS2 substantially augments the reasoning capabilities of LLMs in contexts necessitating physical or commonsense knowledge. It also delivers competitive results on multimodal benchmarks.
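The abstract describes two architectural ideas: a visual-memory module placed inside the LLM's blocks, and a soft mixture over a language expert and a visual-knowledge expert. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration, under the assumption that the visual memory is an additional feed-forward expert and that a per-token router softly blends it with the frozen language FFN. The names `SoftMoMELayer`, `FeedForward`, and the dimensions are hypothetical.

```python
# Hedged sketch of a soft Mixtures-of-Multimodal-Experts layer with a
# visual-memory expert. Not the MKS2 code; an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeedForward(nn.Module):
    """Standard transformer FFN, used here for both experts."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class SoftMoMELayer(nn.Module):
    """Softly mixes a frozen language FFN with a visual-memory FFN.

    A per-token router produces weights over the two experts; their outputs
    are combined as a convex mixture, so visual knowledge stored in the
    memory expert can be shared with the language expert during generation.
    """
    def __init__(self, d_model: int = 768, d_hidden: int = 3072):
        super().__init__()
        self.text_expert = FeedForward(d_model, d_hidden)     # pretrained LLM FFN
        self.visual_memory = FeedForward(d_model, d_hidden)   # stores open-world visual info
        self.router = nn.Linear(d_model, 2)                   # soft routing over 2 experts

        # Keep the original LLM weights intact; train only the new components.
        for p in self.text_expert.parameters():
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        weights = F.softmax(self.router(x), dim=-1)            # (batch, seq, 2)
        out_text = self.text_expert(x)
        out_visual = self.visual_memory(x)
        experts = torch.stack([out_text, out_visual], dim=-1)  # (batch, seq, d_model, 2)
        return torch.einsum("bsde,bse->bsd", experts, weights)


if __name__ == "__main__":
    layer = SoftMoMELayer()
    hidden_states = torch.randn(2, 16, 768)  # dummy hidden states from an LLM block
    mixed = layer(hidden_states)
    print(mixed.shape)                       # torch.Size([2, 16, 768])
```

In this reading, the soft (rather than top-k) routing keeps both experts active for every token, which is one plausible way to realize the "knowledge collaboration during generation" the abstract mentions; the paper itself should be consulted for the actual design.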
