Model Composition For Multimodal Large Language Models

Chen Chi, Du Yiyang, Fang Zheng, Wang Ziyue, Luo Fuwen, Li Peng, Yan Ming, Zhang Ji, Huang Fei, Sun Maosong, Liu Yang. arXiv 2024

[Paper]    
Merging Multimodal Models Tools Training Techniques

Recent developments in Multimodal Large Language Models (MLLMs) have shown rapid progress, moving towards the goal of creating versatile MLLMs that understand inputs from various modalities. However, existing methods typically rely on joint training with paired multimodal instruction data, which is resource-intensive and challenging to extend to new modalities. In this paper, we propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model. Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters. Furthermore, we introduce DAMC to address parameter interference and mismatch issues during the merging process, thereby enhancing model performance. To facilitate research in this area, we propose MCUB, a benchmark for assessing the ability of MLLMs to understand inputs from diverse modalities. Experiments on this benchmark and four other multimodal understanding tasks show significant improvements over baselines, proving that model composition can create a versatile model capable of processing inputs from multiple modalities.
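
The core idea of reusing modality encoders while merging the shared LLM backbone can be illustrated with simple parameter averaging. The sketch below is a minimal, hypothetical illustration of that merging step (not the paper's released code): it assumes the composed MLLMs share the same base LLM architecture, and the helper name `naive_merge` is invented for this example. The paper's NaiveMC and DAMC additionally attach each model's modality encoder and handle parameter interference, which are not shown here.

```python
import torch

def naive_merge(state_dicts, weights=None):
    """Average matching parameters across several LLM state dicts.

    Hypothetical sketch of the parameter-merging step: assumes all
    state dicts come from the same base LLM architecture, so every
    key and tensor shape matches across models.
    """
    if weights is None:
        # Default to a uniform average over the models being composed.
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Usage (assumed setup): merge the LLM backbones of two MLLMs that were
# fine-tuned from the same base model, then reattach each modality encoder.
# merged_llm_params = naive_merge([model_a.state_dict(), model_b.state_dict()])
```
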

Similar Work