Introducing Bode: A Fine-tuned Large Language Model For Portuguese Prompt-based Task

Garcia Gabriel Lino, Paiola Pedro Henrique, Morelli Luis Henrique, Candido Giovani, Júnior Arnaldo Cândido, Jodas Danilo Samuel, Afonso Luis C. S., Guilherme Ivan Rizzo, Penteado Bruno Elias, Papa João Paulo. Arxiv 2024

[Paper]    
In Context Learning Prompting

Large Language Models (LLMs) are increasingly bringing advances to Natural Language Processing. However, low-resource languages such as Portuguese, which lack prominence in the datasets used for many NLP tasks or for which existing datasets are not as substantial, benefit from LLMs but not to the same extent. LLMs trained on multilingual datasets typically struggle to respond to Portuguese prompts satisfactorily, exhibiting, for example, code-switching in their responses. This work proposes Bode, a fine-tuned LLaMA 2-based model for Portuguese prompts, in two versions: 7B and 13B. We evaluate the model on classification tasks using a zero-shot approach with in-context learning and compare it with other LLMs. Our main contribution is an LLM that achieves satisfactory results in Portuguese and is freely available for research and commercial purposes.
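The zero-shot, in-context-learning setup described above can be illustrated with a minimal sketch: the task instruction and the label set are given entirely in a Portuguese prompt, with no task-specific training examples. The model identifier, prompt wording, and labels below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of zero-shot classification via prompting with a causal LM,
# assuming a Hugging Face Transformers checkpoint of Bode is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/bode-7b"  # placeholder; substitute the released checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # requires accelerate

# Zero-shot prompt in Portuguese: instruction and candidate labels are stated
# in the prompt itself, with no labeled examples.
prompt = (
    "Classifique o sentimento da frase a seguir como 'positivo' ou 'negativo'.\n"
    "Frase: O filme foi surpreendentemente bom.\n"
    "Sentimento:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)

# Decode only the newly generated tokens (the predicted label).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```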

Similar Work