MovieLLM: Enhancing Long Video Understanding With AI-Generated Movies

Song Zhende, Wang Chenchen, Sheng Jiamu, Zhang Chi, Yu Gang, Fan Jiayuan, Chen Tao. arXiv 2024

[Paper]    
Tags: Applications, Ethics And Bias, GPT, Language Modeling, Model Architecture, Multimodal Models, Tools

The development of multimodal models has marked a significant step forward in how machines understand videos. These models have shown promise in analyzing short video clips. However, when it comes to longer formats like movies, they often fall short. The main hurdles are the lack of high-quality, diverse video data and the intensive work required to collect or annotate such data. To address these challenges, we propose MovieLLM, a novel framework designed to synthesize consistent, high-quality video data for instruction tuning. The pipeline is carefully designed to control the style of the generated videos by improving the textual inversion technique with the powerful text generation capability of GPT-4. As the first framework of its kind, our approach stands out for its flexibility and scalability, empowering users to create customized movies from a single description. This makes it a superior alternative to traditional data collection methods. Our extensive experiments validate that the data produced by MovieLLM significantly improves the performance of multimodal models in understanding complex video narratives, overcoming the scarcity and bias limitations of existing datasets.
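
To make the pipeline the abstract describes more concrete, here is a minimal sketch of how GPT-4 could expand a single movie description into scene-level prompts, with a textual-inversion style embedding keeping the generated keyframes visually consistent. This is an illustration under stated assumptions, not the authors' implementation: the prompt wording, the `generate_scene_prompts` helper, the Stable Diffusion checkpoint, and the `style_embedding.bin` path with its `<movie-style>` token are all hypothetical.

```python
# Minimal sketch of a MovieLLM-style data pipeline; not the authors' code.
# Assumptions: an OpenAI API key is configured, a CUDA GPU is available, and
# a textual-inversion style embedding was already trained and saved to the
# hypothetical path below.
import torch
from openai import OpenAI
from diffusers import StableDiffusionPipeline

client = OpenAI()

def generate_scene_prompts(description: str, n_scenes: int = 10) -> list[str]:
    """Use GPT-4 to expand one movie description into per-scene prompts."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Expand this movie idea into {n_scenes} short, "
                       f"numbered scene descriptions:\n{description}",
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The learned style embedding is bound to a placeholder token so that every
# keyframe prompt shares the same visual style across the whole "movie".
pipe.load_textual_inversion("style_embedding.bin", token="<movie-style>")

scenes = generate_scene_prompts("A noir detective story in a rain-soaked city")
keyframes = [pipe(f"<movie-style> {s}").images[0] for s in scenes]
```

In a full pipeline, the generated keyframes and plot text would then be paired with GPT-4-written question-answer annotations to form instruction-tuning data for a video-language model.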

Similar Work