
Let Storytelling Tell Vivid Stories: An Expressive And Fluent Multimodal Storyteller

Zang Chuanqi, Tang Jiji, Zhang Rongsheng, Zhao Zeng, Lv Tangjie, Pei Mingtao, Liang Wei. arXiv 2024

[Paper]    
Model Architecture · Multimodal Models · RAG

Storytelling aims to generate reasonable and vivid narratives based on an ordered image stream. Fidelity to the image story theme and divergence of story plots are what keep readers reading. Previous works iteratively improved the alignment of multiple modalities but ultimately produced simplistic storylines for image streams. In this work, we propose a new pipeline, termed LLaMS, to generate multimodal human-level stories characterized by expressiveness and consistency. Specifically, by fully exploiting the commonsense knowledge within the LLM, we first employ a sequence data auto-enhancement strategy to enrich factual content expression and leverage a textual reasoning architecture for expressive story generation and prediction. Second, we propose the SQ-Adapter module for story illustration generation, which maintains sequence consistency. Human evaluations are conducted to verify the superiority of the proposed LLaMS. They show that LLaMS achieves state-of-the-art storytelling performance, with an 86% win rate in correlation and a 100% win rate in consistency compared with previous SOTA methods. Furthermore, ablation experiments verify the effectiveness of the proposed sequence data enhancement and SQ-Adapter.
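The abstract names three pipeline stages: sequence data auto-enhancement, LLM-based expressive story generation, and SQ-Adapter-based illustration generation. Below is a minimal, hypothetical Python sketch of how those stages could be wired together; every function here is an illustrative stub (not the authors' implementation), and in a real system the stubs would be backed by an LLM and a sequence-consistent image generator.

```python
# Hypothetical sketch of an LLaMS-style pipeline; all stages are stand-ins.
from dataclasses import dataclass
from typing import List


@dataclass
class StoryOutput:
    sentences: List[str]        # expressive narrative, one sentence per image
    illustrations: List[bytes]  # generated illustrations, sequence-consistent


def enhance_captions(raw_captions: List[str]) -> List[str]:
    """Sequence data auto-enhancement: enrich factual content of each caption.
    Stubbed here with a simple tag so the sketch stays self-contained."""
    return [f"[enhanced] {c}" for c in raw_captions]


def generate_story(enhanced: List[str]) -> List[str]:
    """Textual reasoning step: turn the enhanced caption stream into an
    expressive, plot-consistent story (stubbed as a trivial rewrite)."""
    return [f"Then, {c.removeprefix('[enhanced] ')}." for c in enhanced]


def sq_adapter_illustrate(story: List[str]) -> List[bytes]:
    """SQ-Adapter-style illustration step: one image per sentence, conditioned
    on earlier frames for consistency. Stubbed with placeholder bytes."""
    return [f"image_for:{s}".encode() for s in story]


def llams_pipeline(raw_captions: List[str]) -> StoryOutput:
    enhanced = enhance_captions(raw_captions)
    story = generate_story(enhanced)
    images = sq_adapter_illustrate(story)
    return StoryOutput(sentences=story, illustrations=images)


if __name__ == "__main__":
    out = llams_pipeline(["a boy finds a map", "he follows it to the shore"])
    print(out.sentences)
```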

Similar Work