Not All Layers Are Equally As Important: Every Layer Counts BERT

Charpentier Lucas Georges Gabriel, Samuel David. arXiv 2023

[Paper]    
BERT, Model Architecture, Pretraining Methods, Training Techniques, Transformer

This paper introduces a novel modification of the transformer architecture, tailored for data-efficient pretraining of language models. Data efficiency is evaluated through participation in the BabyLM challenge, where our solution won both the strict and strict-small tracks. Our approach allows each transformer layer to select which outputs of previous layers to process. The empirical results verify the potential of this simple modification and show that not all layers are equally important.
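
As a concrete reading of that idea, the sketch below shows one plausible way to let each layer choose among the outputs of all previous layers via learned mixing weights. It is a minimal PyTorch illustration, not the authors' implementation; the class name `LayerSelectiveEncoder`, the softmax parameterization of the weights, and the `layer_factory` argument are assumptions made for this example.

```python
import torch
import torch.nn as nn


class LayerSelectiveEncoder(nn.Module):
    """Sketch: each layer mixes the outputs of all previous layers
    (including the embeddings) with learned weights before processing."""

    def __init__(self, num_layers, layer_factory):
        super().__init__()
        self.layers = nn.ModuleList([layer_factory() for _ in range(num_layers)])
        # One learnable weight per earlier output; index 0 is the embedding output.
        self.layer_weights = nn.ParameterList(
            [nn.Parameter(torch.zeros(i + 1)) for i in range(num_layers)]
        )

    def forward(self, embeddings):
        outputs = [embeddings]
        for layer, weights in zip(self.layers, self.layer_weights):
            # Softmax over earlier outputs lets the layer "select" which
            # previous representations to feed into its computation.
            probs = torch.softmax(weights, dim=0)
            mixed = sum(p * out for p, out in zip(probs, outputs))
            outputs.append(layer(mixed))
        return outputs[-1]


# Illustration with standard PyTorch encoder blocks (hypothetical configuration).
encoder = LayerSelectiveEncoder(
    num_layers=12,
    layer_factory=lambda: nn.TransformerEncoderLayer(
        d_model=768, nhead=12, batch_first=True
    ),
)
hidden = encoder(torch.randn(2, 128, 768))  # (batch, sequence length, hidden size)
```

Under this kind of parameterization, inspecting the learned mixing weights after training would reveal which earlier layers each layer actually draws on, one way to observe that not all layers contribute equally.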

Similar Work