
SWAG: Storytelling With Action Guidance

Patel Zeeshan, El-Refai Karim, Pei Jonathan, Li Tianle. arXiv 2024

[Paper]    
Tags: GPT, Model Architecture, Uncategorized

Automated long-form story generation typically employs long-context large language models (LLMs) for one-shot creation, which can produce cohesive but not necessarily engaging content. We introduce Storytelling With Action Guidance (SWAG), a novel approach to storytelling with LLMs. Our approach reduces story writing to a search problem through a two-model feedback loop: one LLM generates story content, and an auxiliary LLM chooses the next best “action” to steer the story’s future direction. Our results show that SWAG can substantially outperform previous end-to-end story generation techniques in both GPT-4 and human evaluations, and our SWAG pipeline using only open-source models surpasses GPT-3.5-Turbo.
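The abstract frames SWAG as a two-model feedback loop: a writer LLM drafts the next passage, and a guide LLM selects the action that steers the story's direction. Below is a minimal sketch of such a loop, assuming generic `generator` and `guide` callables that wrap LLM calls; the `ACTIONS` list, function names, and prompts are illustrative placeholders, not the paper's actual action set or prompting setup.

```python
from typing import Callable, List

# Illustrative candidate actions; the paper defines its own action space.
ACTIONS: List[str] = [
    "introduce a new character",
    "raise the stakes with a conflict",
    "reveal a secret",
    "shift the setting",
    "move toward a resolution",
]

def swag_loop(
    generator: Callable[[str], str],          # story-writer LLM: prompt -> next passage
    guide: Callable[[str, List[str]], str],   # action LLM: (story so far, candidates) -> chosen action
    premise: str,
    num_turns: int = 5,
) -> str:
    """Alternate between generating story content and choosing the next action."""
    story = generator(f"Write the opening of a story based on this premise:\n{premise}")
    for _ in range(num_turns):
        # The guide LLM picks the "next best action" for the story's future direction.
        action = guide(story, ACTIONS)
        # The generator LLM continues the story, conditioned on the chosen action.
        continuation = generator(
            f"Story so far:\n{story}\n\nContinue the story; in the next passage, {action}."
        )
        story += "\n\n" + continuation
    return story
```

In this sketch the guide acts as a simple discriminator over a fixed action list each turn; any LLM client can be plugged in by wrapping it as a `prompt -> text` callable.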

Similar Work