TxT: Crossmodal End-to-End Learning with Transformers

Jan-Martin O. Steitz, Jonas Pfeiffer, Iryna Gurevych, Stefan Roth. arXiv 2021

Tags: Applications, Fine Tuning, Model Architecture, Multimodal Models, Pretraining Methods, RAG, Reinforcement Learning, Training Techniques, Transformer

Reasoning over multiple modalities, e.g., in Visual Question Answering (VQA), requires an alignment of semantic concepts across domains. Despite the widespread success of end-to-end learning, today's multimodal pipelines largely rely on pre-extracted, fixed features from object detectors, typically Faster R-CNN, as representations of the visual world. The obvious downside is that the visual representation is not specifically tuned to the multimodal task at hand. At the same time, while transformer-based object detectors have gained popularity, they have not yet been employed in multimodal pipelines. We address both shortcomings with TxT, a transformer-based crossmodal pipeline that enables fine-tuning both language and visual components on the downstream task in a fully end-to-end manner. We overcome existing limitations of transformer-based detectors for multimodal reasoning, specifically regarding the integration of global context and their scalability. Our transformer-based multimodal model achieves considerable gains from end-to-end learning on multimodal question answering.
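The core idea, letting a single downstream loss update both the visual detector and the language side, can be illustrated with a minimal sketch. The code below is not the authors' TxT implementation: the transformer-based detector is stood in for by a simple patch-embedding module, and the model dimensions, vocabulary size, and answer head are hypothetical placeholders. It only demonstrates how one VQA loss backpropagates through both the visual and language parameters.

```python
# Minimal sketch of end-to-end crossmodal learning (assumptions: simplified
# visual encoder in place of a full transformer-based detector; all sizes
# below are illustrative, not from the paper).
import torch
import torch.nn as nn

class CrossmodalVQA(nn.Module):
    def __init__(self, vocab_size=30522, num_answers=3129, d_model=256):
        super().__init__()
        # Stand-in for a transformer-based detector: a patchify convolution
        # that yields a sequence of visual region features.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=16, stride=16),  # (B, D, H/16, W/16)
            nn.Flatten(2),                                     # (B, D, N)
        )
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.type_embed = nn.Embedding(2, d_model)  # 0 = text, 1 = visual
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        # Joint fusion transformer attending over text and visual tokens.
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.answer_head = nn.Linear(d_model, num_answers)

    def forward(self, images, question_ids):
        vis = self.visual_encoder(images).transpose(1, 2)  # (B, N, D)
        txt = self.token_embed(question_ids)               # (B, T, D)
        vis = vis + self.type_embed.weight[1]
        txt = txt + self.type_embed.weight[0]
        fused = self.fusion(torch.cat([txt, vis], dim=1))  # cross-modal attention
        return self.answer_head(fused[:, 0])               # pool first token

# One end-to-end training step: a single answer-classification loss
# updates the visual encoder and the language embeddings jointly.
model = CrossmodalVQA()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
images = torch.randn(2, 3, 224, 224)
question_ids = torch.randint(0, 30522, (2, 12))
answers = torch.randint(0, 3129, (2,))
loss = nn.functional.cross_entropy(model(images, question_ids), answers)
loss.backward()  # gradients reach both visual and text parameters
optimizer.step()
```

This contrasts with the pre-extracted-feature setup the abstract criticizes, where the detector is frozen and only the fusion and language components would receive gradients.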

Similar Work