OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models

Awadalla Anas, Gao Irena, Gardner Josh, Hessel Jack, Hanafy Yusuf, Zhu Wanrong, Marathe Kalyani, Bitton Yonatan, Gadre Samir, Sagawa Shiori, Jitsev Jenia, Kornblith Simon, Koh Pang Wei, Ilharco Gabriel, Wortsman Mitchell, Schmidt Ludwig. arXiv 2023


We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80% and 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
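The "80% and 89% of corresponding Flamingo performance" figure is a per-dataset relative-performance average. A minimal sketch of that computation, using hypothetical scores (the numbers below are illustrative, not taken from the paper):

```python
# Hypothetical per-dataset scores for an OpenFlamingo model and the
# corresponding Flamingo model on seven vision-language benchmarks.
# These values are made up for illustration only.
openflamingo_scores = [41.1, 52.3, 27.0, 33.2, 60.1, 44.8, 29.5]
flamingo_scores = [49.2, 57.4, 35.0, 40.0, 66.0, 50.1, 36.3]

# Relative performance on each dataset: OpenFlamingo score as a
# fraction of the Flamingo score.
relative = [o / f for o, f in zip(openflamingo_scores, flamingo_scores)]

# Average the per-dataset ratios to get the headline figure.
avg_relative = sum(relative) / len(relative)
print(f"average relative performance: {avg_relative:.1%}")
```

Averaging ratios rather than raw scores keeps datasets with very different score scales from dominating the summary.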
