Multimodal Sequential Generative Models For Semi-supervised Language Instruction Following

Akuzawa Kei, Iwasawa Yusuke, Matsuo Yutaka. arXiv 2022

[Paper]    
Tags: Agentic, Attention Mechanism, Model Architecture, Multimodal Models, RAG, Training Techniques, Transformer

Agents that can follow language instructions are expected to be useful in a variety of situations such as navigation. However, training neural network-based agents requires large numbers of paired trajectories and language instructions. This paper proposes using multimodal generative models for semi-supervised learning in instruction-following tasks. The models learn a shared representation of the paired data and enable semi-supervised learning by reconstructing unpaired data through that representation. Key challenges in applying such models to sequence-to-sequence tasks, including instruction following, are learning a shared representation of variable-length multimodal data and incorporating attention mechanisms. To address these problems, this paper proposes a novel network architecture that absorbs the difference in sequence lengths across modalities. To further improve performance, the paper also shows how to combine the generative model-based approach with an existing semi-supervised method, the speaker-follower model, and proposes a regularization term that improves inference using unpaired trajectories. Experiments in the BabyAI and Room-to-Room (R2R) environments show that the proposed method improves instruction-following performance by leveraging unpaired data, and improves the performance of the speaker-follower model by 2% to 4% on R2R.
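To make the core idea concrete, below is a minimal sketch of a shared-latent multimodal generative model trained on both paired and unpaired sequence data. All module names, dimensions, and loss terms here are illustrative assumptions for a VAE-style formulation, not the paper's actual architecture (which additionally incorporates attention and the speaker-follower combination): modality-specific recurrent encoders map variable-length instructions and trajectories to one fixed-size latent space, paired batches supervise cross-modal reconstruction, and unpaired trajectories contribute a reconstruction-only term through the same latent.

```python
# Hedged sketch: shared-latent multimodal VAE for semi-supervised
# instruction following. Module names, sizes, and loss weighting are
# illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqEncoder(nn.Module):
    """Encodes a variable-length sequence into Gaussian latent parameters,
    absorbing length differences by compressing to a fixed-size RNN state."""
    def __init__(self, input_dim, hidden_dim, latent_dim):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        _, h = self.rnn(x)          # h: (1, batch, hidden)
        h = h.squeeze(0)
        return self.mu(h), self.logvar(h)

class SeqDecoder(nn.Module):
    """Reconstructs a sequence of a given length from the shared latent,
    conditioning every time step on z."""
    def __init__(self, latent_dim, hidden_dim, output_dim):
        super().__init__()
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, output_dim)

    def forward(self, z, seq_len):
        z_seq = z.unsqueeze(1).expand(-1, seq_len, -1)  # tile z across time
        h, _ = self.rnn(z_seq)
        return self.out(h)

def reparameterize(mu, logvar):
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def kl(mu, logvar):
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()

# One encoder/decoder pair per modality, all sharing a single latent space.
lang_enc = SeqEncoder(input_dim=32, hidden_dim=64, latent_dim=16)
traj_enc = SeqEncoder(input_dim=8, hidden_dim=64, latent_dim=16)
lang_dec = SeqDecoder(latent_dim=16, hidden_dim=64, output_dim=32)
traj_dec = SeqDecoder(latent_dim=16, hidden_dim=64, output_dim=8)

def paired_loss(lang, traj):
    # Supervised term: encode the trajectory, reconstruct both modalities.
    mu, logvar = traj_enc(traj)
    z = reparameterize(mu, logvar)
    rec = (F.mse_loss(lang_dec(z, lang.size(1)), lang)
           + F.mse_loss(traj_dec(z, traj.size(1)), traj))
    return rec + kl(mu, logvar)

def unpaired_loss(traj):
    # Semi-supervised term: an unpaired trajectory is reconstructed through
    # the shared latent, shaping the representation without any instruction.
    mu, logvar = traj_enc(traj)
    z = reparameterize(mu, logvar)
    return F.mse_loss(traj_dec(z, traj.size(1)), traj) + kl(mu, logvar)

# Toy usage with synthetic data: note the differing sequence lengths.
lang = torch.randn(4, 10, 32)   # (batch, instruction_len, embed_dim)
traj = torch.randn(4, 25, 8)    # (batch, trajectory_len, state_dim)
loss = paired_loss(lang, traj) + unpaired_loss(torch.randn(4, 25, 8))
loss.backward()
```

The design point the sketch illustrates is that both modalities bottleneck through one fixed-size latent, so instructions and trajectories of different lengths can supervise the same representation, and unpaired trajectories still contribute gradient signal to it.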

Similar Work