
Advances Of Transformer-based Models For News Headline Generation

Bukhtiyarov Alexey, Gusev Ilya. arXiv 2020

[Paper]    
Applications Attention Mechanism BERT Model Architecture Pretraining Methods RAG Transformer

Pretrained language models based on the Transformer architecture are behind recent breakthroughs in many areas of NLP, including sentiment analysis, question answering, and named entity recognition. Headline generation is a special kind of text summarization task: to succeed, a model needs natural language understanding that goes beyond the meaning of individual words and sentences, together with the ability to single out the essential information in an article. In this paper, we fine-tune two pretrained Transformer-based models (mBART and BertSumAbs) for this task and achieve new state-of-the-art results on the RIA and Lenta datasets of Russian news. BertSumAbs improves ROUGE over the previous best scores, achieved by a Phrase-Based Attentional Transformer and by CopyNet, by 2.9 and 2.0 points on average, respectively.
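As a rough illustration of the fine-tuning setup the abstract describes, the sketch below fine-tunes mBART for Russian headline generation with the Hugging Face Transformers library. The checkpoint name, sequence lengths, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: fine-tuning mBART for Russian news headline generation.
# Checkpoint, lengths, and learning rate are assumptions for illustration,
# not the setup reported in the paper.
import torch
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-cc25", src_lang="ru_RU", tgt_lang="ru_RU"
)
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)


def training_step(article: str, headline: str) -> float:
    """One gradient step on a single (news body, headline) pair."""
    batch = tokenizer(article, max_length=600, truncation=True, return_tensors="pt")
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(headline, max_length=48, truncation=True, return_tensors="pt")
    outputs = model(**batch, labels=labels["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()


def generate_headline(article: str) -> str:
    """Decode a headline for an unseen article with beam search."""
    batch = tokenizer(article, max_length=600, truncation=True, return_tensors="pt")
    generated = model.generate(
        **batch,
        num_beams=5,
        max_length=48,
        decoder_start_token_id=tokenizer.lang_code_to_id["ru_RU"],
    )
    return tokenizer.decode(generated[0], skip_special_tokens=True)
```

In practice the training step would be looped over batches of (body, headline) pairs from RIA or Lenta, and the generated headlines scored with ROUGE, as in the paper's evaluation.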

Similar Work