
Building A Llama2-finetuned LLM For Odia Language Utilizing Domain Knowledge Instruction Set

Kohli Guneet Singh, Parida Shantipriya, Sekhar Sambit, Saha Samirit, Nair Nipun B, Agarwal Parul, Khosla Sonal, Patiyal Kusumlata, Dhal Debasish. arXiv 2023

[Paper]    
Fine Tuning Pretraining Methods Reinforcement Learning Training Techniques

Building LLMs for languages other than English is in great demand, since existing multilingual LLMs are often unavailable or perform poorly, for example in understanding local context. The problem is especially acute for low-resource languages, which lack instruction sets for fine-tuning. In a multilingual country like India, LLMs supporting Indic languages are needed to provide generative AI and LLM-based technologies and services to its citizens. This paper presents our approach of i) generating a large Odia instruction set, including domain knowledge data suitable for LLM fine-tuning, and ii) building a Llama2-finetuned model tailored for enhanced performance in the Odia domain. The proposed work will help researchers build instruction sets and LLMs, particularly for Indic languages. We will release the model and instruction set to the public for research and noncommercial purposes.

Similar Work