Regularizing Transformers With Deep Probabilistic Layers

Aurora Cobo Aguilera, Pablo Martínez Olmos, Antonio Artés-Rodríguez, Fernando Pérez-Cruz. arXiv 2021

[Paper]    
Attention Mechanism BERT Model Architecture Pretraining Methods Transformer

Language models (LMs) have grown non-stop over the last decade, from sequence-to-sequence architectures to today's state-of-the-art attention-based Transformers. In this work, we demonstrate how including deep generative models within BERT yields more versatile models that can impute missing or noisy words with richer text and even improve the BLEU score. More precisely, we use a Gaussian Mixture Variational Autoencoder (GMVAE) as a regularizer layer and show its effectiveness not only in Transformers but also in the most relevant encoder-decoder-based LMs, seq2seq with and without attention.
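The abstract describes inserting a GMVAE on top of intermediate Transformer hidden states so that its ELBO acts as an auxiliary regularization loss. Below is a minimal PyTorch sketch of that idea, assuming the GMVAE is attached to the output of one intermediate layer and its loss is simply added to the task loss; the layer sizes, number of mixture components, simplified KL term, and loss weighting are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (assumption-laden): a GMVAE used as a regularizing layer
# on intermediate Transformer hidden states. Dimensions, number of mixture
# components, and the loss weighting are illustrative, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GMVAERegularizer(nn.Module):
    def __init__(self, hidden_dim=768, latent_dim=64, n_components=10):
        super().__init__()
        self.n_components = n_components
        # Inference networks: component posterior q(c|h) and per-component Gaussian q(z|h, c)
        self.component_logits = nn.Linear(hidden_dim, n_components)
        self.mu = nn.Linear(hidden_dim, latent_dim * n_components)
        self.logvar = nn.Linear(hidden_dim, latent_dim * n_components)
        # Decoder reconstructs the hidden state from the latent code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h):
        # h: (batch, seq_len, hidden_dim) hidden states from a Transformer layer
        B, T, _ = h.shape
        pi = F.softmax(self.component_logits(h), dim=-1)            # (B, T, K)
        mu = self.mu(h).view(B, T, self.n_components, -1)           # (B, T, K, Z)
        logvar = self.logvar(h).view(B, T, self.n_components, -1)   # (B, T, K, Z)
        # Soft assignment: mix component means/log-variances with q(c|h)
        mu_mix = (pi.unsqueeze(-1) * mu).sum(dim=2)
        logvar_mix = (pi.unsqueeze(-1) * logvar).sum(dim=2)
        # Reparameterized sample and reconstruction of the hidden state
        z = mu_mix + torch.randn_like(mu_mix) * torch.exp(0.5 * logvar_mix)
        h_rec = self.decoder(z)
        # Regularization terms: reconstruction + KL to a standard normal prior
        # (a simplification of the full GMVAE ELBO)
        rec_loss = F.mse_loss(h_rec, h)
        kl = -0.5 * torch.mean(1 + logvar_mix - mu_mix.pow(2) - logvar_mix.exp())
        return h_rec, rec_loss + kl


# Usage sketch: add the GMVAE loss to the task loss during training.
# gmvae = GMVAERegularizer()
# h = bert_layer_output                # hidden states of some intermediate layer
# _, reg_loss = gmvae(h)
# loss = task_loss + 0.1 * reg_loss    # 0.1 is an illustrative weight
```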

Similar Work