
Adding Recurrence To Pretrained Transformers For Improved Efficiency And Context Size

Davis Yoshida, Allyson Ettinger, Kevin Gimpel. arXiv 2020

[Paper]    
Tags: Efficiency And Optimization, Fine Tuning, GPT, Model Architecture, Pretraining Methods, Training Techniques, Transformer

Fine-tuning a pretrained transformer for a downstream task has become a standard method in NLP in the last few years. While the results from these models are impressive, applying them can be extremely computationally expensive, as is pretraining new models with the latest architectures. We present a novel method for applying pretrained transformer language models which lowers their memory requirement both at training and inference time. An additional benefit is that our method removes the fixed context size constraint that most transformer models have, allowing for more flexible use. When applied to the GPT-2 language model, we find that our method attains better perplexity than an unmodified GPT-2 model on the PG-19 and WikiText-103 corpora, for a given amount of computation or memory.
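The core idea — processing an arbitrarily long sequence in fixed-size chunks while carrying forward a compressed memory of earlier context — can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function names (`chunked_apply`, `toy_step`) and the toy memory rule are assumptions made for clarity.

```python
# Hypothetical sketch of recurrent chunked application of a
# fixed-context model. Per-chunk memory stays bounded by
# chunk_size + mem_size, while the effective context is unbounded.

def chunked_apply(tokens, chunk_size, mem_size, step):
    """Run `step` over `tokens` chunk by chunk, threading a
    recurrent memory from each chunk to the next."""
    memory = []      # summary of everything seen so far
    outputs = []
    for start in range(0, len(tokens), chunk_size):
        chunk = tokens[start:start + chunk_size]
        # the model only ever sees (memory, chunk): a fixed-size window
        out, memory = step(memory, chunk, mem_size)
        outputs.extend(out)
    return outputs

def toy_step(memory, chunk, mem_size):
    """Stand-in for a model forward pass: echoes the chunk and
    keeps the last `mem_size` tokens of context as the new memory."""
    context = memory + list(chunk)
    return list(chunk), context[-mem_size:]
```

In a real implementation the memory would hold (compressed) hidden states rather than raw tokens, but the control flow — bounded window per step, state threaded across steps — is what removes the fixed context-size constraint the abstract describes.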

Similar Work