X-VILA: Cross-modality Alignment For Large Language Model

Ye Hanrong, Huang De-An, Lu Yao, Yu Zhiding, Ping Wei, Tao Andrew, Kautz Jan, Han Song, Xu Dan, Molchanov Pavlo, Yin Hongxu. arXiv 2024

[Paper]
Merging, Multimodal Models, Reinforcement Learning, Training Techniques

We introduce X-VILA, an omni-modality model designed to extend the capabilities of large language models (LLMs) by incorporating image, video, and audio modalities. By aligning modality-specific encoders with LLM inputs and diffusion decoders with LLM outputs, X-VILA achieves cross-modality understanding, reasoning, and generation. To facilitate this cross-modality alignment, we curate an effective interleaved any-to-any modality instruction-following dataset. Furthermore, we identify a significant problem with the current cross-modality alignment method: it results in visual information loss. To address this issue, we propose a visual alignment mechanism with a visual embedding highway module. We then introduce a resource-efficient recipe for training X-VILA, which exhibits proficiency in any-to-any modality conversation, surpassing previous approaches by large margins. X-VILA also showcases emergent properties across modalities even in the absence of similar training data. The project will be made open-source.
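The abstract describes an architecture in which visual encoder features bypass the LLM and are fused back in before the diffusion decoder, so that fine-grained visual detail lost in the text-centric LLM can be recovered. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the class name, gating scheme, and dimensions are assumptions for clarity and do not reproduce the paper's actual implementation.

```python
import torch
import torch.nn as nn


class VisualEmbeddingHighway(nn.Module):
    """Hypothetical sketch: project encoder-level visual features into the LLM's
    embedding space and add them, via a learnable gate, to the LLM output
    embeddings that condition the diffusion decoder."""

    def __init__(self, vis_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(vis_dim, llm_dim)   # map visual features to LLM space
        self.gate = nn.Parameter(torch.zeros(1))  # learnable mixing weight, starts closed

    def forward(self, llm_out: torch.Tensor, vis_feat: torch.Tensor) -> torch.Tensor:
        # llm_out:  (B, T, llm_dim) embeddings produced by the LLM for the decoder
        # vis_feat: (B, T, vis_dim) features from the modality-specific visual encoder
        return llm_out + torch.tanh(self.gate) * self.proj(vis_feat)


if __name__ == "__main__":
    highway = VisualEmbeddingHighway(vis_dim=1024, llm_dim=4096)
    llm_out = torch.randn(2, 16, 4096)
    vis_feat = torch.randn(2, 16, 1024)
    fused = highway(llm_out, vis_feat)
    print(fused.shape)  # torch.Size([2, 16, 4096])
```

The zero-initialized gate is one common way to add such a side path without perturbing a pretrained pipeline at the start of training; the paper's own fusion mechanism may differ.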

Similar Work