
Efficient Adaptation Of Pretrained Transformers For Abstractive Summarization

Andrew Hoang, Antoine Bosselut, Asli Celikyilmaz, Yejin Choi. arXiv 2019

[Paper]    
Applications, Model Architecture, Pretraining Methods, Training Techniques, Transformer

Large-scale learning of transformer language models has yielded improvements on a variety of natural language understanding tasks. Whether they can be effectively adapted for summarization, however, has been less explored, as the learned representations are less seamlessly integrated into existing neural text production architectures. In this work, we propose two solutions for efficiently adapting pretrained transformer language models as text summarizers: source embeddings and domain-adaptive training. We test these solutions on three abstractive summarization datasets, achieving new state-of-the-art performance on two of them. Finally, we show that these improvements are achieved by producing more focused summaries with less superfluous content, and that performance improvements are more pronounced on more abstractive datasets.
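The first of the two proposed adaptations, source embeddings, amounts to giving the language model a learned signal for whether each token in the concatenated input belongs to the source document or to the summary being generated, added on top of the usual token and position embeddings. The sketch below illustrates this general idea only; the class name `SourceAwareEmbedding`, the PyTorch framing, and the chosen dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of "source embeddings": an extra
# learned embedding marking whether a token comes from the source document
# or from the summary, for a decoder-only LM reading [source ; summary].
import torch
import torch.nn as nn

class SourceAwareEmbedding(nn.Module):
    """Token + position + source-type embeddings (hypothetical module)."""

    def __init__(self, vocab_size: int, max_len: int, d_model: int):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        # Two source types: 0 = source-document token, 1 = summary token.
        self.src_type = nn.Embedding(2, d_model)

    def forward(self, token_ids: torch.Tensor, type_ids: torch.Tensor) -> torch.Tensor:
        # token_ids, type_ids: (batch, seq_len)
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return self.tok(token_ids) + self.pos(positions) + self.src_type(type_ids)

# Example: a 10-token source followed by a 4-token partial summary.
emb = SourceAwareEmbedding(vocab_size=50257, max_len=512, d_model=768)
tokens = torch.randint(0, 50257, (1, 14))
types = torch.tensor([[0] * 10 + [1] * 4])  # segment label for each token
hidden = emb(tokens, types)                 # (1, 14, 768)
```

In such a setup the resulting embeddings would simply replace the token-plus-position embeddings at the input to the transformer blocks; the second solution, domain-adaptive training, broadly corresponds to continuing language-model training on in-domain text before summarization fine-tuning.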

Similar Work