Understanding BLOOM: An Empirical Study On Diverse NLP Tasks

Parag Pravin Dakle, Saikrishna Rallabandi, Preethi Raghavan. arXiv 2022

[Paper]
Applications BERT Fine Tuning GPT Language Modeling Model Architecture Pretraining Methods Prompting Reinforcement Learning Training Techniques

We view the landscape of large language models (LLMs) through the lens of the recently released BLOOM model to understand how BLOOM and other decoder-only LLMs perform compared to BERT-style encoder-only models. We achieve this by evaluating the smaller BLOOM variants (350m/560m and 1b3/1b7) on several NLP benchmark datasets and popular leaderboards. We make the following observations: (1) BLOOM performance does not scale with parameter size, unlike other LLMs such as GPT and BERT; experiments fine-tuning BLOOM models show that the 560m variant performs similarly to or better than the 1b7 variant. (2) Zero-shot cross-lingual and multilingual fine-tuning experiments show that BLOOM is on par with or worse than monolingual GPT-2 models. (3) Toxicity analysis of prompt-based text generation on the RealToxicityPrompts dataset shows that text generated by BLOOM is at least 17% less toxic than text generated by GPT-2 and GPT-3 models.
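The prompt-based generation setup used in these experiments can be reproduced at small scale. Below is a minimal sketch, assuming the Hugging Face `transformers` library and the publicly released `bigscience/bloom-560m` checkpoint (the smaller variant the paper finds competitive with 1b7); the prompt string is illustrative, not drawn from RealToxicityPrompts.

```python
# Minimal sketch: greedy continuation of a prompt with a small BLOOM variant.
# Assumes the Hugging Face transformers library; the prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # one of the smaller variants studied
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The movie was surprisingly"  # placeholder prompt, not from the paper
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation; the paper's toxicity analysis scores
# continuations like this one with an external toxicity classifier.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping `model_name` for `bigscience/bloom-1b7` or `gpt2` would allow the same kind of cross-size and cross-family comparison the paper reports, with generated continuations then passed to a toxicity scorer of choice.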

Similar Work