From Bytes To Borsch: Fine-tuning Gemma And Mistral For The Ukrainian Language Representation

Artur Kiulian, Anton Polishko, Mykola Khandoga, Oryna Chubych, Jack Connor, Raghav Ravishankar, Adarsh Shirawalmath. arXiv 2024

[Paper]    
Tags: Ethics And Bias, Fine Tuning, Pretraining Methods, RAG, Tools, Training Techniques

In the rapidly advancing field of AI and NLP, generative large language models (LLMs) stand at the forefront of innovation, showcasing unparalleled abilities in text understanding and generation. However, the limited representation of low-resource languages like Ukrainian poses a notable challenge, restricting the reach and relevance of this technology. Our paper addresses this by fine-tuning the open-source Gemma and Mistral LLMs on Ukrainian datasets, aiming to improve their linguistic proficiency and benchmarking them against other existing models capable of processing the Ukrainian language. This endeavor not only aims to mitigate language bias in technology but also promotes inclusivity in the digital realm. Our transparent and reproducible approach encourages further NLP research and development. Additionally, we present the Ukrainian Knowledge and Instruction Dataset (UKID) to aid future efforts in language model fine-tuning. Our research not only advances the field of NLP but also highlights the importance of linguistic diversity in AI, which is crucial for cultural preservation, education, and expanding AI’s global utility. Ultimately, we advocate for a future where technology is inclusive, enabling AI to communicate effectively across all languages, especially those currently underrepresented.
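The paper's training code is not reproduced on this page. As a rough sketch of what fine-tuning an open model such as Mistral or Gemma on a Ukrainian corpus can look like, the example below uses Hugging Face `transformers` with LoRA adapters from `peft`. The model id, data file, and hyperparameters are illustrative assumptions, not the authors' actual recipe.

```python
# Illustrative sketch only: this is NOT the paper's exact training setup.
# Model id, corpus path, and hyperparameters are placeholder assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "mistralai/Mistral-7B-v0.1"  # or e.g. "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Mistral's tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA keeps the trainable parameter count small compared to full fine-tuning.
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)

# Placeholder corpus: any plain-text Ukrainian dataset, one document per line.
dataset = load_dataset("text", data_files={"train": "ukrainian_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mistral-uk-lora",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard causal (next-token) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, the LoRA adapters can be saved with `model.save_pretrained(...)` and later merged into the base weights or loaded alongside them for evaluation against other Ukrainian-capable models.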

Similar Work