
AMMUS : A Survey Of Transformer-based Pretrained Models In Natural Language Processing

Kalyan Katikapalli Subramanyam, Rajasekharan Ajit, Sangeetha Sivanesan. arXiv 2021

[Paper]
BERT Fine Tuning GPT Model Architecture Pretraining Methods Survey Paper Training Techniques Transformer

Transformer-based pretrained language models (T-PTLMs) have achieved great success in almost every NLP task. The evolution of these models started with GPT and BERT. These models are built on top of transformers, self-supervised learning, and transfer learning. T-PTLMs learn universal language representations from large volumes of text data using self-supervised learning and transfer this knowledge to downstream tasks. These models provide good background knowledge for downstream tasks, which avoids training downstream models from scratch. In this comprehensive survey paper, we initially give a brief overview of self-supervised learning. Next, we explain various core concepts like pretraining, pretraining methods, pretraining tasks, embeddings, and downstream adaptation methods. We then present a new taxonomy of T-PTLMs and give a brief overview of various benchmarks, both intrinsic and extrinsic. We present a summary of various useful libraries for working with T-PTLMs. Finally, we highlight some future research directions that will further improve these models. We strongly believe that this comprehensive survey paper will serve as a good reference both to learn the core concepts and to stay updated with recent developments in T-PTLMs.
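
The pretrain-then-fine-tune workflow described in the abstract can be illustrated with a minimal sketch. This is not code from the survey; it assumes the Hugging Face Transformers and Datasets libraries, the `bert-base-uncased` checkpoint, and the IMDB dataset purely as illustrative choices for adapting a pretrained T-PTLM to a downstream classification task rather than training from scratch.

```python
# Minimal sketch (illustrative, not from the paper): adapt a pretrained
# transformer to a downstream task instead of training from scratch.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load a T-PTLM whose weights come from self-supervised pretraining
# on large text corpora, plus its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Downstream task data (binary sentiment classification as an example).
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

encoded = dataset.map(tokenize, batched=True)

# Fine-tune: the universal representations learned during pretraining are
# adapted to the downstream task with a small amount of labeled data.
args = TrainingArguments(
    output_dir="out", num_train_epochs=1, per_device_train_batch_size=16
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```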

Similar Work