
LLaVaOLMoBitnet1B: Ternary LLM Goes Multimodal!

Jainaveen Sundaram, Ravi Iyer. arXiv 2024

[Paper] [Code]    
Has Code · Multimodal Models · RAG · Reinforcement Learning · Training Techniques

Multimodal Large Language Models (MM-LLMs) have seen significant advancements in the last year, demonstrating impressive performance across tasks. However, to truly democratize AI, models must exhibit strong capabilities and be able to run efficiently on the small compute footprints accessible to most. As part of this quest, we introduce LLaVaOLMoBitnet1B - the first Ternary Multimodal LLM capable of accepting Image(s)+Text inputs to produce coherent textual responses. The model is fully open-sourced along with training scripts to encourage further research in this space. This accompanying technical report highlights the training process, evaluation details, challenges associated with ternary models, and future opportunities. Link to the model: https://huggingface.co/IntelLabs/LlavaOLMoBitnet1B
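The key idea behind the "ternary" label is constraining weights to {-1, 0, +1}. This abstract does not spell out the exact quantization recipe; as a rough illustration only, the sketch below shows the absmean ternary quantization scheme popularized by BitNet b1.58, the line of work that ternary LLMs like this one build on. The function name and `eps` value are illustrative, not from the paper.

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Quantize a weight tensor to {-1, 0, +1} with a per-tensor scale,
    following the absmean scheme described in BitNet b1.58 (illustrative)."""
    # Scale by the mean absolute value of the weights.
    gamma = w.abs().mean().clamp(min=eps)
    # Round to the nearest integer and clip to the ternary set {-1, 0, +1}.
    w_ternary = (w / gamma).round().clamp(-1, 1)
    return w_ternary, gamma

# Usage: the dequantized approximation of w is w_ternary * gamma.
w = torch.randn(4, 4)
w_q, gamma = absmean_ternary_quantize(w)
print(w_q, gamma)
```

Because each weight takes one of only three values, matrix multiplies reduce largely to additions and subtractions, which is what makes such models attractive for the small compute footprints the abstract targets.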

Similar Work