
Language Repository For Long Video Understanding

Kumara Kahatapitiya, Kanchana Ranasinghe, Jongwoo Park, Michael S. Ryoo. arXiv 2024

[Paper] [Code]    
Applications, Efficiency And Optimization, Has Code, Pruning, Reinforcement Learning, Tools

Language has become a prominent modality in computer vision with the rise of multi-modal LLMs. Although these models support long context lengths, their effectiveness in handling long-term information gradually declines as the input grows. This becomes critical especially in applications such as long-form video understanding. In this paper, we introduce a Language Repository (LangRepo) for LLMs that maintains concise and structured information as an interpretable (i.e., all-textual) representation. Our repository is updated iteratively based on multi-scale video chunks. We introduce write and read operations that focus on pruning redundancies in text and extracting information at various temporal scales. The proposed framework is evaluated on zero-shot visual question-answering benchmarks including EgoSchema, NExT-QA, IntentQA and NExT-GQA, showing state-of-the-art performance at its scale. Our code is available at https://github.com/kkahatapitiya/LangRepo.
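The sketch below is a rough illustration of the repository idea described in the abstract: captions from video chunks are written into an all-textual store keyed by temporal scale, a write operation prunes redundant text, and a read operation returns a structured summary across scales. It is not the authors' implementation (see the linked repository for that); the class name, the word-overlap redundancy heuristic, and the scale keys are assumptions made here for illustration, whereas the paper uses LLM-based rewriting and summarization for these operations.

```python
from collections import defaultdict


class LanguageRepository:
    """Illustrative, simplified text repository for multi-scale video chunk captions.

    Hypothetical sketch only: the actual LangRepo write/read operations
    rewrite and summarize text with an LLM rather than using string heuristics.
    """

    def __init__(self):
        # entries[scale] holds an ordered list of caption strings for that temporal scale
        self.entries = defaultdict(list)

    def write(self, scale, captions):
        """Add captions for a chunk at the given temporal scale,
        skipping lines that look redundant with existing entries."""
        for caption in captions:
            if not self._is_redundant(scale, caption):
                self.entries[scale].append(caption)

    def read(self, scales=None):
        """Return a structured, all-textual summary across the requested
        temporal scales, suitable for inclusion in an LLM prompt."""
        scales = sorted(self.entries) if scales is None else scales
        sections = []
        for s in scales:
            lines = "\n".join(f"- {c}" for c in self.entries[s])
            sections.append(f"[scale {s}s]\n{lines}")
        return "\n\n".join(sections)

    def _is_redundant(self, scale, caption):
        # Crude word-overlap check standing in for LLM-based redundancy pruning.
        new_words = set(caption.lower().split())
        for existing in self.entries[scale]:
            old_words = set(existing.lower().split())
            if new_words and len(new_words & old_words) / len(new_words) > 0.8:
                return True
        return False


if __name__ == "__main__":
    repo = LanguageRepository()
    # Two near-duplicate short-scale captions: the second is pruned on write.
    repo.write(8, ["a person opens the fridge", "a person opens the fridge door"])
    repo.write(64, ["the person prepares a meal in the kitchen"])
    print(repo.read())
```

Running the example prints one block per temporal scale, with the near-duplicate short-scale caption dropped, which is the kind of concise, interpretable representation the paper feeds back to the LLM for long-video question answering.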

Similar Work