Recent Advances In Natural Language Processing Via Large Pre-trained Language Models: A Survey

Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, Dan Roth. arXiv 2021

[Paper]
Applications BERT Fine Tuning Language Modeling Model Architecture Pretraining Methods Prompting Survey Paper Training Techniques Transformer

Large, pre-trained transformer-based language models such as BERT have drastically changed the Natural Language Processing (NLP) field. This survey covers recent work that applies these large language models to NLP tasks via three approaches: pre-training followed by fine-tuning, prompting, and text generation. It also presents approaches that use pre-trained language models to generate data for training augmentation or other purposes, and concludes with a discussion of limitations and suggested directions for future research.
