Birbal: An Efficient 7B Instruct-model Fine-tuned With Curated Datasets

Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh. arXiv 2024

[Paper]    
Tags: Efficiency And Optimization, Ethics And Bias, Fine Tuning, Pretraining Methods, Training Techniques

LLMOps incur significant costs due to hardware requirements, hindering their widespread accessibility. Additionally, a lack of transparency in model training methods and data makes the majority of models non-reproducible. To tackle these challenges, the LLM Efficiency Challenge was introduced at a NeurIPS Workshop, aiming to adapt foundation models to a diverse set of tasks via fine-tuning on a single GPU (RTX 4090, or A100 with 40GB) within a 24-hour timeframe. In this system description paper, we introduce Birbal, our Mistral-7B-based winning model, fine-tuned on a single RTX 4090 for 16 hours. Birbal's success lies in curating high-quality instructions covering diverse tasks, resulting in a 35% performance improvement over the second-best, Qwen-14B-based submission.
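The abstract does not spell out the training recipe, so the following is only a minimal sketch of the kind of single-GPU setup the challenge constraints imply: 4-bit QLoRA-style fine-tuning of Mistral-7B with Hugging Face transformers, peft, and datasets. The file name `curated_instructions.jsonl` and all hyperparameters are placeholders, not values from the paper.

```python
# Sketch only: single-GPU instruction fine-tuning of Mistral-7B with 4-bit
# quantization and LoRA adapters. Hyperparameters are illustrative placeholders.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "mistralai/Mistral-7B-v0.1"

# Quantize the frozen base weights to 4-bit NF4 so the model fits in 24 GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Train only low-rank adapters on the attention projections.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM"))

# Hypothetical curated dataset: JSONL with a pre-formatted "text" field holding
# instruction/response pairs (the curated data itself is the paper's contribution).
dataset = load_dataset("json", data_files="curated_instructions.jsonl",
                       split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    args=TrainingArguments(
        output_dir="birbal-sketch",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
        logging_steps=50,
    ),
    # Causal-LM collator copies input_ids into labels for next-token loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Quantizing the frozen base weights and updating only the adapters is what makes a 7B model trainable within a single RTX 4090's 24 GB under the challenge's time and hardware budget.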
