
Aya Dataset: An Open-access Collection For Multilingual Instruction Tuning

Singh Shivalika, Vargus Freddie, Dsouza Daniel, Karlsson Börje F., Mahendiran Abinaya, Ko Wei-Yin, Shandilya Herumb, Patel Jay, Mataciunas Deividas, O'Mahony Laura, Zhang Mike, Hettiarachchi Ramith, Wilson Joseph, Machado Marina, Moura Luisa Souza, Krzemiński Dominik, Fadaei Hakimeh, Ergün Irem, Okoh Ifeoma, Alaagib Aisha, Mudannayake Oshan, Alyafeai Zaid, Chien Vu Minh, Ruder Sebastian, Guthikonda Surya, Alghamdi Emad A., Gehrmann Sebastian, Muennighoff Niklas, Bartolo Max, Kreutzer Julia, Üstün Ahmet, Fadaee Marzieh, Hooker Sara. arXiv 2024

Fine Tuning, Pretraining Methods, Reinforcement Learning, Tools, Training Techniques

Datasets are foundational to many breakthroughs in modern artificial intelligence. Many recent achievements in natural language processing (NLP) can be attributed to fine-tuning pre-trained models on a diverse set of tasks, which enables a large language model (LLM) to respond to instructions. Instruction fine-tuning (IFT) requires specifically constructed and annotated datasets, yet existing IFT datasets are almost entirely in English. In this work, our primary goal is to bridge the language gap by building a human-curated instruction-following dataset spanning 65 languages. We worked with fluent speakers of languages from around the world to collect natural instances of instructions and completions. Furthermore, we create the most extensive multilingual collection to date, comprising 513 million instances obtained by templating and translating existing datasets across 114 languages. In total, we contribute four key resources: we develop and open-source the Aya Annotation Platform, the Aya Dataset, the Aya Collection, and the Aya Evaluation Suite. The Aya initiative also serves as a valuable case study in participatory research, involving collaborators from 119 countries. We see this as a framework for future research collaborations that aim to bridge gaps in resources.
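The collection's scale comes in part from rendering existing labeled datasets into instruction-completion pairs through templates. Below is a minimal sketch of that templating step, assuming a generic sentiment-classification source; the field names (`text`, `label`), the template wording, and the `IFTInstance` structure are illustrative assumptions, not the templates actually used in the Aya Collection.

```python
from dataclasses import dataclass

@dataclass
class IFTInstance:
    """One instruction-following training example."""
    prompt: str
    completion: str
    language: str

# Hypothetical template that recasts a classification example as an instruction.
TEMPLATE = (
    "Classify the sentiment of the following review as positive or negative.\n\n"
    "Review: {text}"
)

def templatize(example: dict, language: str) -> IFTInstance:
    """Render one source example into an instruction-completion pair."""
    return IFTInstance(
        prompt=TEMPLATE.format(text=example["text"]),
        completion=example["label"],
        language=language,
    )

source = {"text": "The battery lasts all day.", "label": "positive"}
print(templatize(source, language="eng"))
```

Translating the rendered pairs (or the source data before templating) follows the same per-example pattern, which is what lets this approach scale to hundreds of millions of instances across 114 languages.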
