How To Bridge The Gap Between Modalities: A Comprehensive Survey On Multimodal Large Language Model

Song Shezheng, Li Xiaopeng, Li Shasha, Zhao Shan, Yu Jie, Ma Jun, Mao Xiaoguang, Zhang Weimin. arXiv 2023

[Paper]    
Fine Tuning, GPT, Model Architecture, Multimodal Models, Reinforcement Learning, Survey Paper, Tools

This review paper explores Multimodal Large Language Models (MLLMs), which extend Large Language Models (LLMs) such as GPT-4 to handle multimodal data like text and vision. MLLMs demonstrate capabilities such as generating image narratives and answering image-based questions, bringing models closer to real-world human-computer interaction and hinting at a potential pathway toward artificial general intelligence. However, MLLMs still struggle to bridge the semantic gap between modalities, which can lead to erroneous generation and pose potential risks to society. Choosing an appropriate modality alignment method is crucial, as an improper choice may require more parameters while yielding limited performance improvement. This paper explores modality alignment methods for LLMs and their current capabilities. Implementing modality alignment allows LLMs to address environmental issues and enhances accessibility. The survey categorizes existing modality alignment methods in MLLMs into four groups: (1) Multimodal Converters, which transform multimodal data into representations LLMs can process; (2) Multimodal Perceivers, which improve how LLMs perceive different types of data; (3) Tools Assistance, which converts data into one common format, usually text; and (4) Data-Driven methods, which teach LLMs to understand specific types of data from a dataset. This field is still in a phase of exploration and experimentation, and we will continue to organize and update existing research methods for multimodal information alignment.
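To make the first two groups of the taxonomy concrete, below is a minimal illustrative sketch, not a method proposed by the survey. It assumes a typical setup in which a frozen vision encoder produces patch features that must be mapped into an LLM's embedding space; the class names, dimensions (`vision_dim=1024`, `llm_dim=4096`), and query count are hypothetical choices for illustration.

```python
# Illustrative sketch only; the survey categorizes alignment methods but does
# not prescribe this code. All module names and dimensions are hypothetical.
import torch
import torch.nn as nn


class MultimodalConverter(nn.Module):
    """Group (1): convert vision-encoder features into the LLM's
    token-embedding space so image patches can be fed in as soft tokens."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: (batch, num_patches, vision_dim)
        return self.proj(vision_feats)  # -> (batch, num_patches, llm_dim)


class MultimodalPerceiver(nn.Module):
    """Group (2): a small set of learned queries cross-attends to the vision
    features, compressing them into a fixed number of tokens for the LLM."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096,
                 num_queries: int = 32, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, llm_dim))
        self.kv_proj = nn.Linear(vision_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, num_heads, batch_first=True)

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        kv = self.kv_proj(vision_feats)                        # (batch, patches, llm_dim)
        q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)                          # (batch, num_queries, llm_dim)
        return out


if __name__ == "__main__":
    feats = torch.randn(2, 256, 1024)             # stand-in for ViT patch features
    print(MultimodalConverter()(feats).shape)     # torch.Size([2, 256, 4096])
    print(MultimodalPerceiver()(feats).shape)     # torch.Size([2, 32, 4096])
```

The design trade-off the sketch illustrates: a converter keeps every patch token and adds few parameters, while a perceiver compresses the visual input into a fixed budget of tokens at the cost of an extra attention module, which relates to the survey's point that an improper alignment choice may add parameters without a matching performance gain.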

Similar Work