Do Massively Pretrained Language Models Make Better Storytellers?

Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D. Manning. arXiv 2019

[Paper]    
Tags: Applications, GPT, Model Architecture

Large neural language models trained on massive amounts of text have emerged as a formidable strategy for Natural Language Understanding tasks. However, the strength of these models as Natural Language Generators is less clear. Though anecdotal evidence suggests that these models generate better quality text, there has been no detailed study characterizing their generation abilities. In this work, we compare the performance of an extensively pretrained model, OpenAI GPT2-117 (Radford et al., 2019), to a state-of-the-art neural story generation model (Fan et al., 2018). By evaluating the generated text across a wide variety of automatic metrics, we characterize the ways in which pretrained models do, and do not, make better storytellers. We find that although GPT2-117 conditions more strongly on context, is more sensitive to ordering of events, and uses more unusual words, it is just as likely to produce repetitive and under-diverse text when using likelihood-maximizing decoding algorithms.
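The abstract attributes the repetitive, under-diverse text to likelihood-maximizing decoding and notes that this is detected through automatic metrics. As a hedged illustration only (not the paper's actual evaluation code), the sketch below computes a distinct-n style diversity score, a common proxy for repetition in generated text; the function names and the exact metric choice are assumptions for illustration.

```python
# Minimal sketch of a distinct-n diversity metric (illustrative, not the
# paper's implementation): the ratio of unique n-grams to total n-grams
# across a set of generated texts. Low values signal the kind of repetitive,
# under-diverse output associated with likelihood-maximizing decoding.

from collections import Counter
from typing import List, Tuple


def ngrams(tokens: List[str], n: int) -> List[Tuple[str, ...]]:
    """Return all n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def distinct_n(texts: List[str], n: int) -> float:
    """Unique n-grams divided by total n-grams over all texts.

    Values near 1.0 indicate diverse output; values near 0.0 indicate
    heavy repetition.
    """
    counts = Counter()
    for text in texts:
        counts.update(ngrams(text.split(), n))
    total = sum(counts.values())
    return len(counts) / total if total else 0.0


if __name__ == "__main__":
    repetitive = ["the man said the man said the man said"]
    varied = ["the man said hello to the woman on the bridge"]
    print("distinct-2 (repetitive):", round(distinct_n(repetitive, 2), 3))  # ~0.375
    print("distinct-2 (varied):    ", round(distinct_n(varied, 2), 3))      # 1.0
```

In practice such a metric would be run over many story continuations from each model and decoding strategy, so that diversity can be compared across systems rather than judged from a single sample.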

Similar Work