Diversifying Query: Region-guided Transformer For Temporal Sentence Grounding

Xiaolong Sun, Liushuai Shi, Le Wang, Sanping Zhou, Kun Xia, Yabing Wang, Gang Hua. arXiv, 2024

Efficiency And Optimization · Model Architecture · Pretraining Methods · RAG · Reinforcement Learning · Transformer

Temporal sentence grounding is a challenging task that aims to localize the moment spans relevant to a language description. Although recent DETR-based models have achieved notable progress by leveraging multiple learnable moment queries, they suffer from overlapping and redundant proposals, leading to inaccurate predictions. We attribute this limitation to the lack of task-related guidance that would steer each learnable query toward a specific prediction mode. Furthermore, the complex solution space induced by variable, open-vocabulary language descriptions exacerbates the optimization difficulty, making it harder for the learnable queries to adaptively distinguish themselves from one another. To address these issues, we present the Region-Guided TRansformer (RGTR) for temporal sentence grounding, which diversifies the moment queries to eliminate overlapping and redundant predictions. Instead of using learnable queries, RGTR adopts a set of anchor pairs as moment queries to introduce explicit regional guidance. Each anchor pair takes charge of moment prediction for a specific temporal region, which reduces the optimization difficulty and ensures the diversity of the final predictions. In addition, we design an IoU-aware scoring head to improve proposal quality. Extensive experiments demonstrate the effectiveness of RGTR, which outperforms state-of-the-art methods on the QVHighlights, Charades-STA, and TACoS datasets.
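
The abstract only sketches the architecture at a high level. The PyTorch snippet below is a minimal, hypothetical illustration (not the authors' implementation) of the two ideas it names: fixed anchor pairs of (center, width) acting as region-specific moment queries in place of freely learnable ones, and an IoU-aware scoring head that rates proposal quality. All function names, module names, and tensor shapes here are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming normalized (center, width) span coordinates in [0, 1].
# Names and shapes are illustrative, not taken from the RGTR codebase.

def build_anchor_pairs(num_regions: int = 10, widths=(0.1, 0.3)) -> torch.Tensor:
    """Evenly spaced centers, each paired with several candidate widths.

    Returns a tensor of shape (num_regions * len(widths), 2) holding
    normalized (center, width) anchors that serve as fixed, region-specific
    moment queries instead of freely learnable embeddings.
    """
    centers = (torch.arange(num_regions) + 0.5) / num_regions
    anchors = [torch.stack([c.repeat(len(widths)), torch.tensor(widths)], dim=-1)
               for c in centers]
    return torch.cat(anchors, dim=0)


class IoUAwareScoringHead(nn.Module):
    """Predicts a quality score per proposal; in training it would be
    supervised to match the IoU between each predicted span and its
    ground-truth moment (supervision omitted in this sketch)."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, 1))

    def forward(self, decoder_feats: torch.Tensor) -> torch.Tensor:
        # decoder_feats: (batch, num_queries, d_model) -> (batch, num_queries)
        return self.mlp(decoder_feats).squeeze(-1).sigmoid()


if __name__ == "__main__":
    anchors = build_anchor_pairs()                 # (20, 2) fixed regional priors
    feats = torch.randn(2, anchors.size(0), 256)   # dummy decoder output
    scores = IoUAwareScoringHead()(feats)          # (2, 20) quality per proposal
    print(anchors.shape, scores.shape)
```

Because each anchor is tied to a fixed temporal region by construction, the decoder's proposals are spread across the timeline rather than collapsing onto the same segment, which is the property the paper credits for reducing overlapping and redundant predictions.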

Similar Work