
Rocode: A Dataset For Measuring Code Intelligence From Problem Definitions In Romanian

Cosma Adrian, Iordache Bogdan, Rosso Paolo. arXiv 2024

[Paper]    
Fine Tuning Pretraining Methods Prompting Survey Paper Training Techniques

Recently, large language models (LLMs) have become increasingly powerful and are now capable of solving a plethora of tasks when given proper instructions in natural language. However, the vast majority of testing suites assume that the instructions are written in English, the de facto prompting language. Code intelligence and problem solving remain difficult tasks, even for the most advanced LLMs. Currently, there are no datasets for measuring the generalization power of code-generation models in a language other than English. In this work, we present RoCode, a competitive programming dataset consisting of 2,642 problems written in Romanian, 11k solutions in C, C++ and Python, and comprehensive testing suites for each problem. The purpose of RoCode is to provide a benchmark for evaluating the code intelligence of language models trained on Romanian / multilingual text, as well as a fine-tuning set for pretrained Romanian models. Through our results and a review of related works, we argue for the need to develop code models for languages other than English.
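As a rough illustration of how a benchmark with per-problem test suites like RoCode is typically scored (this is not the authors' actual evaluation harness; the problem, test cases, and function names below are hypothetical), a model-generated candidate solution can be executed against each test case's input and its output compared to the expected output:

```python
import subprocess

# Hypothetical test suite for one problem: pairs of (stdin, expected stdout).
# RoCode ships comprehensive test suites per problem; these two cases are
# invented for illustration (problem: sum two integers).
test_cases = [
    ("2 3\n", "5\n"),
    ("10 -4\n", "6\n"),
]

# A candidate Python solution, as a model might generate it.
candidate_source = "a, b = map(int, input().split())\nprint(a + b)"

def passes_all_tests(source: str, cases) -> bool:
    """Run the candidate on each test input and compare stdout exactly."""
    for stdin_data, expected in cases:
        result = subprocess.run(
            ["python3", "-c", source],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=5,  # guard against non-terminating candidates
        )
        if result.returncode != 0 or result.stdout != expected:
            return False
    return True

print(passes_all_tests(candidate_source, test_cases))  # True for this candidate
```

A real harness would add sandboxing, per-case time and memory limits, and support for compiled languages (C/C++), but the pass/fail comparison against reference outputs is the core of this style of evaluation.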

Similar Work