
Performance Of Recent Large Language Models For A Low-resourced Language

Jayakody Ravindu, Dias Gihan. arXiv 2024

Tags: Fine Tuning, GPT, Model Architecture, Pretraining Methods, Reinforcement Learning, Training Techniques

Large Language Models (LLMs) have advanced significantly in the past year. In addition to new versions of GPT and Llama, several other LLMs have been introduced recently, some of them open models available for download and modification. Although multilingual LLMs have been available for some time, their performance on low-resourced languages such as Sinhala has been poor. We evaluated four recent LLMs on their Sinhala performance, both by prompting them directly in Sinhala and by translating inputs and outputs to and from English. We also evaluated how well they can be fine-tuned with a small amount of fine-tuning data. Claude and GPT-4o perform well out of the box and do significantly better than their previous versions; Llama and Mistral perform poorly but show some promise of improvement with fine-tuning.
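The two evaluation modes described above can be sketched as follows. This is a minimal illustration, not code from the paper: `query_model` and `translate` are hypothetical placeholders for an LLM API call and a machine-translation step, stubbed out here so the example runs standalone.

```python
from typing import Callable

def evaluate_direct(query_model: Callable[[str], str],
                    sinhala_prompt: str) -> str:
    """Mode 1: send the Sinhala prompt to the LLM as-is."""
    return query_model(sinhala_prompt)

def evaluate_via_translation(query_model: Callable[[str], str],
                             translate: Callable[[str, str, str], str],
                             sinhala_prompt: str) -> str:
    """Mode 2: translate Sinhala -> English, query in English,
    then translate the answer back to Sinhala."""
    english_prompt = translate(sinhala_prompt, "si", "en")
    english_answer = query_model(english_prompt)
    return translate(english_answer, "en", "si")

if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs without any external service.
    echo_model = lambda prompt: f"[model answer to: {prompt}]"
    tag_translate = lambda text, src, dst: f"[{src}->{dst}] {text}"
    question = "ශ්‍රී ලංකාවේ අගනුවර කුමක්ද?"  # "What is the capital of Sri Lanka?"
    print(evaluate_direct(echo_model, question))
    print(evaluate_via_translation(echo_model, tag_translate, question))
```

In practice, the stubs would be replaced by real model and translation calls, and the two modes compared on the same Sinhala test set to measure how much translation helps a given model.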
