Bridging Text And Video: A Universal Multimodal Transformer For Video-audio Scene-aware Dialog

Li Zekang, Li Zongjia, Zhang Jinchao, Feng Yang, Niu Cheng, Zhou Jie. arXiv 2020

[Paper]
Tags: Model Architecture, Multimodal Models, Pretraining Methods, Transformer

Audio-Visual Scene-Aware Dialog (AVSD) is the task of generating responses in a conversation about a given video; it was organized as a track of the 8th Dialog System Technology Challenge (DSTC8). To solve the task, we propose a universal multimodal transformer and introduce a multi-task learning method that learns joint representations across modalities while generating informative and fluent responses. Our method extends a pre-trained natural language generation model to the multimodal dialogue generation task. Our system achieved the best performance in both the objective and subjective evaluations of the challenge.
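The abstract does not spell out how the modalities are combined, but a common recipe for this kind of universal multimodal transformer is to project video and audio features into the word-embedding space, concatenate them with the dialogue token embeddings, and add modality-type (segment) embeddings so the model can distinguish them. The sketch below illustrates that input construction only; all names, dimensions, and the projection scheme are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Illustrative (assumed) dimensions
d_model = 8                                  # shared transformer hidden size
rng = np.random.default_rng(0)

video_feats = rng.normal(size=(5, 16))       # 5 video segments, 16-dim visual features
audio_feats = rng.normal(size=(3, 12))       # 3 audio segments, 12-dim acoustic features
text_embeds = rng.normal(size=(7, d_model))  # 7 dialogue-token embeddings

# Learned linear projections (random here) that map each modality
# into the same embedding space as the text tokens
W_video = rng.normal(size=(16, d_model))
W_audio = rng.normal(size=(12, d_model))
video_proj = video_feats @ W_video
audio_proj = audio_feats @ W_audio

# Modality-type (segment) embeddings: 0 = video, 1 = audio, 2 = text
segment_table = rng.normal(size=(3, d_model))
segment_ids = np.concatenate([
    np.full(len(video_proj), 0),
    np.full(len(audio_proj), 1),
    np.full(len(text_embeds), 2),
])

# One joint sequence the transformer attends over end to end
joint = np.concatenate([video_proj, audio_proj, text_embeds], axis=0)
joint = joint + segment_table[segment_ids]

print(joint.shape)  # (15, 8): 5 + 3 + 7 positions, each d_model-dimensional
```

In an architecture like this, the joint sequence would then be fed to a single transformer decoder (e.g. a GPT-2-style pre-trained model), so self-attention mixes information across all three modalities before the response tokens are generated.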

Similar Work