
SHAQ: Single Headed Attention With Quasi-recurrence

Bharwani Nashwin, Kushner Warren, Dandona Sangeet, Schreiber Ben. arXiv 2021

[Paper]    
Tags: Attention Mechanism, Fine Tuning, Model Architecture, Pretraining Methods, Training Techniques, Transformer

Natural Language Processing research has recently been dominated by large-scale transformer models. Although they achieve state-of-the-art results on many important language tasks, transformers often require expensive compute resources and days to weeks to train. This is feasible for researchers at big tech companies and leading research universities, but not for scrappy start-up founders, students, and independent researchers. Stephen Merity's SHA-RNN, a compact, hybrid attention-RNN model, is designed for consumer-grade modeling, as it requires significantly fewer parameters and less training time to reach near state-of-the-art results. We analyze Merity's model through an exploratory analysis of several components of the architecture, assessing both training time and overall quality. Ultimately, we combine these findings into a new architecture, which we call SHAQ: the Single Headed Attention Quasi-recurrent Neural Network. With our new architecture we achieve accuracy comparable to the SHA-RNN while obtaining a 4x speedup in training.
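To make the combination of quasi-recurrence and single-headed attention concrete, here is a minimal PyTorch sketch of a SHAQ-style block. It is not the authors' implementation: the layer sizes, gating, normalization, and residual layout are assumptions chosen for illustration only.

```python
# Hypothetical sketch of a SHAQ-style block: a quasi-recurrent (QRNN) layer
# followed by a single causally masked attention head, each with a residual
# connection. Dimensions and design details are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QRNNLayer(nn.Module):
    """Quasi-recurrent layer: a causal 1-D convolution produces candidate and
    forget gates; a cheap elementwise 'f-pool' mixes them over time."""

    def __init__(self, d_model: int, kernel_size: int = 2):
        super().__init__()
        self.conv = nn.Conv1d(d_model, 2 * d_model, kernel_size,
                              padding=kernel_size - 1)

    def forward(self, x):                        # x: (batch, seq, d_model)
        out = self.conv(x.transpose(1, 2))[..., :x.size(1)]  # trim to causal length
        z, f = out.chunk(2, dim=1)               # candidates and forget gates
        z, f = torch.tanh(z), torch.sigmoid(f)
        h, hs = torch.zeros_like(z[..., 0]), []
        for t in range(z.size(-1)):              # elementwise recurrence (no matmul)
            h = f[..., t] * h + (1 - f[..., t]) * z[..., t]
            hs.append(h)
        return torch.stack(hs, dim=1)            # (batch, seq, d_model)


class SingleHeadAttention(nn.Module):
    """A single attention head with a causal mask, in the spirit of SHA-RNN."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x):                        # x: (batch, seq, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) * self.scale
        future = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(future, float("-inf"))
        return F.softmax(scores, dim=-1) @ v


class SHAQBlock(nn.Module):
    """Quasi-recurrent mixing plus one attention head, with residuals."""

    def __init__(self, d_model: int = 512):
        super().__init__()
        self.qrnn = QRNNLayer(d_model)
        self.attn = SingleHeadAttention(d_model)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        x = x + self.qrnn(self.norm1(x))         # parallel conv + cheap sequential pool
        x = x + self.attn(self.norm2(x))         # single attention head over the sequence
        return x


if __name__ == "__main__":
    block = SHAQBlock(d_model=64)
    tokens = torch.randn(2, 16, 64)              # (batch, seq, d_model)
    print(block(tokens).shape)                   # torch.Size([2, 16, 64])
```

The intuition behind the speedup is that the quasi-recurrent layer replaces the expensive matrix-multiply recurrence of an LSTM with convolutions that parallelize across the sequence plus an elementwise gated pool, while a single attention head keeps the attention cost far below that of a multi-headed transformer layer.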

Similar Work