
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models

Anas Awadalla et al. arXiv 2023 – 43 citations

[Paper] [Code]    

We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind’s Flamingo models. Across seven vision-language datasets, OpenFlamingo models average between 80% and 89% of the corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
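The "80–89% of corresponding Flamingo performance" figure is a mean of per-dataset score ratios. A minimal sketch of that computation is below; the dataset scores are hypothetical placeholders, not the paper's actual results.

```python
# Sketch: averaging per-dataset OpenFlamingo/Flamingo score ratios.
# All scores below are hypothetical placeholders for illustration only.

def average_relative_performance(open_scores, flamingo_scores):
    """Mean of per-dataset (OpenFlamingo / Flamingo) score ratios, in percent."""
    ratios = [o / f for o, f in zip(open_scores, flamingo_scores)]
    return 100 * sum(ratios) / len(ratios)

# Hypothetical scores on seven vision-language datasets.
openflamingo_scores = [40.0, 52.1, 30.5, 61.2, 44.8, 55.0, 48.3]
flamingo_scores     = [49.2, 60.0, 35.1, 70.4, 52.9, 63.5, 57.0]

print(f"{average_relative_performance(openflamingo_scores, flamingo_scores):.1f}%")
```

Averaging ratios (rather than taking a ratio of averages) weights each dataset equally regardless of its score scale.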

Similar Work