
Transformers Go for the LOLs: Generating (Humourous) Titles from Scientific Abstracts End-to-End

Yanran Chen, Steffen Eger. arXiv 2022

[Paper]
Fine Tuning GPT Model Architecture Pretraining Methods Reinforcement Learning Training Techniques Transformer

We consider the end-to-end abstract-to-title generation problem, exploring seven recent transformer-based models (including ChatGPT) fine-tuned on more than 30k abstract-title pairs from NLP and machine learning (ML) venues. As an extension, we also consider the harder problem of generating humorous paper titles. For the latter, we compile the first large-scale humor-annotated dataset for scientific papers in the NLP/ML domains, comprising ~2.6k titles. We evaluate all models using human and automatic metrics. Our human evaluation suggests that our best end-to-end system performs similarly to human authors (but arguably slightly worse). Generating funny titles is more difficult, however, and our automatic systems clearly underperform relative to humans and often learn dataset artefacts of humor. Finally, ChatGPT, without any fine-tuning, performs at the level of our best fine-tuned system.
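The paper does not specify an implementation, but the fine-tuning setup it describes (sequence-to-sequence training on abstract-title pairs) can be sketched with the Hugging Face `transformers` library. Everything below is an illustrative assumption: the model choice (`t5-small`), the hyperparameters, and the toy one-example dataset are placeholders, not the authors' configuration or data.

```python
# Minimal sketch of abstract -> title fine-tuning, assuming a generic
# seq2seq model. Model, hyperparameters, and data are illustrative only.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Toy stand-in for the ~30k abstract-title pairs used in the paper.
pairs = Dataset.from_dict({
    "abstract": ["We consider the end-to-end abstract-to-title generation problem ..."],
    "title": ["Transformers Go for the LOLs"],
})

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def preprocess(batch):
    # Abstracts are the source sequence, titles the target sequence.
    model_inputs = tokenizer(batch["abstract"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["title"], max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = pairs.map(preprocess, batched=True, remove_columns=pairs.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="abs2title", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Generate a title for an unseen abstract.
inputs = tokenizer("A new abstract ...", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The same recipe applies to the humorous-title variant: only the training pairs change (titles drawn from the paper's humor-annotated subset), which is consistent with the authors' observation that such systems tend to pick up dataset artefacts of humor rather than humor itself.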

Similar Work