
Life After BERT: What Do Other Muppets Understand About Language?

Lialin Vladislav, Zhao Kevin, Shivagunde Namrata, Rumshisky Anna. Proceedings of the 2022

[Paper]    
BERT GPT Language Modeling Model Architecture Pretraining Methods Reinforcement Learning Training Techniques Transformer

Existing analyses of pre-trained transformers usually focus on only one or two model families at a time, overlooking the variability of architectures and pre-training objectives. In our work, we use the oLMpics benchmark and psycholinguistic probing datasets to evaluate a diverse set of 29 models, including T5, BART, and ALBERT. Additionally, we adapt the oLMpics zero-shot setup for autoregressive models and evaluate GPT networks of different sizes. Our findings show that none of these models can resolve compositional questions in a zero-shot fashion, suggesting that this skill is not learnable using existing pre-training objectives. Furthermore, we find that global model decisions such as architecture, directionality, dataset size, and pre-training objective are not predictive of a model's linguistic capabilities.
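For readers unfamiliar with the zero-shot setup described above: adapting oLMpics-style multiple-choice probes to an autoregressive model amounts to scoring each candidate answer by the log-likelihood the model assigns to it as a continuation of the question. The sketch below is only an illustrative approximation under that assumption; the prompt rewording, the choice of GPT-2, and the helper `candidate_log_likelihood` are placeholders, not the authors' released evaluation code.

```python
# Minimal sketch: zero-shot multiple-choice scoring with an autoregressive LM.
# Each candidate is scored by the summed log-probability of its tokens when
# appended to the prompt; the highest-scoring candidate is the prediction.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def candidate_log_likelihood(prompt: str, candidate: str) -> float:
    """Sum of log-probabilities the model assigns to the candidate tokens
    when they follow the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    cand_ids = tokenizer(" " + candidate, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, cand_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # The token at position t is predicted by the logits at position t - 1.
    start = prompt_ids.shape[1]
    total = 0.0
    for i in range(cand_ids.shape[1]):
        total += log_probs[0, start + i - 1, cand_ids[0, i]].item()
    return total

# Hypothetical rewording of an oLMpics-style age-comparison question so the
# blank falls at the end, where a left-to-right model can fill it.
prompt = "Compared to a 24 year old person, a 41 year old person is"
candidates = ["younger", "older"]

scores = {c: candidate_log_likelihood(prompt, c) for c in candidates}
print(scores, "->", max(scores, key=scores.get))
```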

Similar Work