ANOLE: An Open, Autoregressive, Native Large Multimodal Models For Interleaved Image-text Generation

Ethan Chern, Jiadi Su, Yan Ma, Pengfei Liu. arXiv 2024


Previous open-source large multimodal models (LMMs) have faced several limitations: (1) they often lack native integration, requiring adapters to align visual representations with pre-trained large language models (LLMs); (2) many are restricted to single-modal generation; (3) while some support multimodal generation, they rely on separate diffusion models for visual modeling and generation. To mitigate these limitations, we present Anole, an open, autoregressive, native large multimodal model for interleaved image-text generation. We build Anole from Meta AI’s Chameleon, adopting an innovative fine-tuning strategy that is both data-efficient and parameter-efficient. Anole demonstrates high-quality, coherent multimodal generation capabilities. We have open-sourced our model, training framework, and instruction tuning data.
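One way to read the "parameter-efficient" part of this strategy, consistent with the authors' description of fine-tuning only the image-token logits of the transformer's output head while keeping the rest of Chameleon frozen, is sketched below in PyTorch. This is a minimal illustration under stated assumptions: the toy module, its dimensions, and the image-token ID range are illustrative stand-ins, not values taken from the paper or its release.

```python
import torch
import torch.nn as nn

# Toy dimensions; the 8192-entry image-codebook slice of the vocabulary and
# its ID range are illustrative assumptions, not values from the paper.
HIDDEN, VOCAB = 512, 65536
IMG_START, IMG_END = 4, 8196  # assumed contiguous image-token ID range

class ToyDecoder(nn.Module):
    """Stand-in for a Chameleon-style autoregressive trunk plus output head."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Linear(HIDDEN, HIDDEN)        # proxy for the transformer blocks
        self.lm_head = nn.Linear(HIDDEN, VOCAB, bias=False)

    def forward(self, h):
        return self.lm_head(self.trunk(h))

model = ToyDecoder()

# Freeze everything except the output head.
for p in model.trunk.parameters():
    p.requires_grad = False

# Zero the gradient on every head row outside the image-token range, so an
# optimizer step moves only the logits that score image tokens.
def image_rows_only(grad):
    mask = torch.zeros_like(grad)
    mask[IMG_START:IMG_END] = 1.0
    return grad * mask

model.lm_head.weight.register_hook(image_rows_only)

# One illustrative step: only rows IMG_START:IMG_END of lm_head.weight change.
opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-3)
loss = model(torch.randn(2, HIDDEN)).logsumexp(-1).mean()  # dummy loss
loss.backward()
opt.step()
```

Because the gradient hook zeroes every non-image row, a plain SGD step leaves the frozen trunk and the remaining logits untouched; an optimizer with decoupled weight decay (e.g., AdamW) would additionally need the decay masked to preserve the frozen rows exactly.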
