
Open Source Conversational LLMs Do Not Know Most Spanish Words

Conde Javier, González Miguel, Melero Nina, Ferrando Raquel, Martínez Gonzalo, Merino-Gómez Elena, Hernández José Alberto, Reviriego Pedro. SEPLN Journal, 2024

[Paper]

Tags: Attention Mechanism, Bias Mitigation, Ethics And Bias, Fairness, Model Architecture

The growing interest in Large Language Models (LLMs), and in particular in conversational models with which users can interact, has led to the development of a large number of open-source chat LLMs. These models are evaluated on a wide range of benchmarks that assess their ability to answer questions or solve problems on almost any topic, to reason, or to interpret texts. In contrast, the evaluation of what these models know about the languages themselves, for example the words they can recognize and use, has received much less attention. In this paper, we evaluate the knowledge that open-source chat LLMs have of Spanish words by testing a sample of words drawn from a reference dictionary: the models are asked for the meaning of each word and to use it correctly in a sentence with context. The results show that open-source chat LLMs produce incorrect meanings for a significant fraction of the words and are unable to use most of the words correctly in sentences with context. These results show how Spanish is being left behind in the open-source LLM race and highlight the need to push for linguistic fairness in conversational LLMs, ensuring that they provide similar performance across languages.
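A minimal sketch of the kind of word-level evaluation the abstract describes: sample words from a Spanish reference dictionary, ask the chat model for each word's meaning, and ask it to write a sentence using the word in context. The function `query_chat_model`, the prompts, and the `DICTIONARY_SAMPLE` list below are illustrative placeholders, not the paper's actual protocol, dictionary, or data; the step of judging whether the meanings and sentences are correct is left out.

```python
# Hedged sketch of a word-knowledge evaluation loop for a chat LLM.
# All names and prompts here are assumptions for illustration only.

import random

# Illustrative sample; the paper draws a sample from a reference Spanish dictionary.
DICTIONARY_SAMPLE = ["alcaraván", "zozobra", "recoveco", "desasosiego", "chubasco"]


def query_chat_model(prompt: str) -> str:
    """Placeholder for a call to the open-source chat LLM under evaluation.

    Replace with a real call (e.g. a local model or an API client).
    """
    return "[respuesta del modelo]"


def evaluate_word(word: str) -> dict:
    """Ask the model for the word's meaning and for a sentence using it in context."""
    meaning = query_chat_model(f"¿Qué significa la palabra '{word}' en español?")
    sentence = query_chat_model(
        f"Escribe una frase en español que use correctamente la palabra '{word}'."
    )
    return {"word": word, "meaning": meaning, "sentence": sentence}


if __name__ == "__main__":
    words = random.sample(DICTIONARY_SAMPLE, k=3)
    results = [evaluate_word(w) for w in words]
    # In the paper, the returned meanings and sentences are then checked for
    # correctness; that judging step (manual or automated) is not shown here.
    for r in results:
        print(r["word"], "->", r["meaning"])
```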

Similar Work