Ptt5-v2: A Closer Look At Continued Pretraining Of T5 Models For The Portuguese Language

Piau Marcos, Lotufo Roberto, Nogueira Rodrigo. arXiv 2024

[Paper]    
Efficiency And Optimization Pretraining Methods Training Techniques

Despite advancements in Natural Language Processing (NLP) and the growing availability of pretrained models, the English language remains the primary focus of model development. Continued pretraining on language-specific corpora provides a practical solution for adapting models to other languages. However, the impact of different pretraining settings on downstream tasks remains underexplored. This work introduces \(\texttt{ptt5-v2}\), investigating the continued pretraining of T5 models for Portuguese. We first develop a baseline set of settings and pretrain models with sizes up to 3B parameters. Finetuning on three Portuguese downstream tasks (assin2 STS, assin2 RTE, and TweetSentBR) yields SOTA results on the latter two. We then explore the effects of different pretraining configurations, including quality filters, optimization strategies, and multi-epoch pretraining. Perhaps surprisingly, their impact remains subtle compared to our baseline. We release \(\texttt{ptt5-v2}\) pretrained checkpoints and the finetuned MonoT5 rerankers on HuggingFace at https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0 and https://huggingface.co/collections/unicamp-dl/monoptt5-66653981877df3ea727f720d.
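Since the pretrained checkpoints are published on the HuggingFace Hub, they can be loaded directly with the `transformers` library. The sketch below is illustrative only: the model id `unicamp-dl/ptt5-v2-base` is an assumption based on the linked collection, and the exact identifiers and available sizes (up to 3B parameters) should be taken from the collection page.

```python
# Minimal sketch: loading a ptt5-v2 checkpoint with HuggingFace transformers.
# The model id below is an assumption; check the unicamp-dl collection linked
# above for the actual checkpoint names.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "unicamp-dl/ptt5-v2-base"  # hypothetical id from the ptt5-v2 collection

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 is a text-to-text model: feed a Portuguese prompt and generate output tokens.
inputs = tokenizer(
    "Resumir: o modelo foi treinado em um corpus em português.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The finetuned MonoPTT5 rerankers in the second collection are likewise T5 seq2seq checkpoints and can presumably be loaded the same way, with input formatting following the MonoT5 reranking setup.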
