PeLLE: Encoder-based Language Models For Brazilian Portuguese Based On Open Data

De Mello Guilherme Lamartine, Finger Marcelo, Serras Felipe, Carpi Miguel De Mello, Jose Marcos Menon, Domingues Pedro Henrique, Cavalim Paulo. Arxiv 2024

[Paper]    
BERT · Model Architecture · Pretraining Methods · Training Techniques · Transformer

In this paper we present PeLLE, a family of large language models based on the RoBERTa architecture, for Brazilian Portuguese, trained on curated, open data from the Carolina corpus. Aiming at reproducible results, we describe details of the pretraining of the models. We also evaluate PeLLE models against a set of existing multilingual and PT-BR-refined pretrained Transformer-based LLM encoders, contrasting the performance of large versus smaller-but-curated pretrained models in several downstream tasks. We conclude that several tasks perform better with larger models, but some tasks benefit from smaller-but-curated data in their pretraining.
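The evaluation setup described above (fine-tuning an encoder on PT-BR downstream tasks) follows the standard Hugging Face Transformers workflow. The sketch below is not from the paper: the model identifier is a placeholder rather than an official PeLLE checkpoint name, and ASSIN2 is used only as an example of a commonly used Brazilian Portuguese benchmark, assumed to be available on the Hub with `premise`, `hypothesis`, and `entailment_judgment` fields.

```python
# Minimal sketch: fine-tune a RoBERTa-style PT-BR encoder on a downstream
# classification task. Model id and dataset choice are assumptions, not
# details taken from the PeLLE paper.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "your-org/pelle-base"  # placeholder checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# ASSIN2 (recognizing textual entailment) as an example PT-BR benchmark.
dataset = load_dataset("assin2")

def tokenize(batch):
    # Encode sentence pairs; truncate to a fixed maximum length.
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=128)

encoded = dataset.map(tokenize, batched=True)
encoded = encoded.rename_column("entailment_judgment", "labels")

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pelle-assin2", num_train_epochs=3),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
print(trainer.evaluate())
```

The same pattern applies to the other downstream comparisons mentioned in the abstract: only the checkpoint name changes when contrasting larger multilingual encoders against smaller, curated-data models.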

Similar Work