
H2O-Danube-1.8B Technical Report

Philipp Singer, Pascal Pfeiffer, Yauhen Babakhin, Maximilian Jeblick, Nischay Dhankhar, Gabor Fodor, Sri Satish Ambati. arXiv 2024

[Paper]    
Efficiency and Optimization, Fine-Tuning, Pretraining Methods, RAG, Reinforcement Learning, Training Techniques

We present H2O-Danube, a series of small 1.8B language models consisting of H2O-Danube-1.8B, trained on 1T tokens, and the incrementally improved H2O-Danube2-1.8B, trained on an additional 2T tokens. Our models exhibit highly competitive metrics across a multitude of benchmarks and, as of the time of this writing, H2O-Danube2-1.8B achieves the top ranking on the Open LLM Leaderboard among all models below the 2B parameter range. The models follow the core principles of Llama 2 and Mistral, and we leverage and refine various techniques for pre-training large language models. We additionally release chat models trained with supervised fine-tuning followed by direct preference optimization. We make all models openly available under the Apache 2.0 license, further democratizing LLMs to a wider audience economically.
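For readers unfamiliar with direct preference optimization (DPO), the objective applied after supervised fine-tuning trains the policy to prefer chosen over rejected responses relative to a frozen reference model. The sketch below is a minimal, illustrative PyTorch implementation of the standard DPO loss (Rafailov et al., 2023); the function name, tensor shapes, and the `beta` value are assumptions for illustration, not details reported in the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss.

    Each argument is a 1-D tensor of summed log-probabilities of the
    chosen / rejected responses under the policy or the frozen reference
    model. `beta` scales the implicit KL penalty; 0.1 is a common default,
    not a value taken from the H2O-Danube report.
    """
    policy_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # Maximize the margin by which the policy prefers the chosen response,
    # measured relative to the reference model.
    return -F.logsigmoid(beta * (policy_logratios - ref_logratios)).mean()

# Example with dummy log-probabilities for a batch of two preference pairs.
pc = torch.tensor([-12.0, -8.0])   # policy log p(chosen)
pr = torch.tensor([-14.0, -7.5])   # policy log p(rejected)
rc = torch.tensor([-12.5, -8.2])   # reference log p(chosen)
rr = torch.tensor([-13.0, -8.0])   # reference log p(rejected)
print(dpo_loss(pc, pr, rc, rr))
```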
