
Probing Representations Learned By Multimodal Recurrent And Transformer Models

Libovický Jindřich, Madhyastha Pranava. arXiv 2019

[Paper]    
Applications Language Modeling Model Architecture Multimodal Models Pretraining Methods Training Techniques Transformer

Recent literature shows that large-scale language modeling provides excellent reusable sentence representations with both recurrent and self-attentive architectures. However, there has been less clarity on the commonalities and differences in the representational properties induced by the two architectures. It has also been shown that visual information serves as one of the means for grounding sentence representations. In this paper, we present a meta-study assessing the representational quality of models whose training signal is obtained from different modalities, in particular language modeling, image feature prediction, and both textual and multimodal machine translation. We evaluate the textual and visual features of sentence representations obtained with these predominant approaches on image retrieval and semantic textual similarity. Our experiments reveal that on moderate-sized datasets, a sentence counterpart in a target language or visual modality provides a much stronger training signal for sentence representation than language modeling. Importantly, we observe that while the Transformer models achieve superior machine translation quality, representations from the recurrent neural network-based models perform significantly better on tasks focused on semantic relevance.
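
The abstract names two probing evaluations for the learned sentence representations: image retrieval and semantic textual similarity (STS). The sketch below is a minimal, hypothetical illustration of how such probing is commonly scored (cosine similarity between fixed embeddings, Spearman correlation against human STS judgments, and recall@k for sentence-to-image retrieval). It is not the authors' code; the function names and the random stand-in embeddings are assumptions for illustration only.

```python
# Minimal sketch (not the paper's pipeline): scoring fixed sentence
# representations on STS and image retrieval. The embedding matrices are
# stand-ins for features extracted from a trained recurrent or Transformer
# encoder.
import numpy as np
from scipy.stats import spearmanr


def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two matrices of equal shape."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)


def sts_correlation(emb_a: np.ndarray, emb_b: np.ndarray,
                    gold_scores: np.ndarray) -> float:
    """Spearman correlation of cosine similarities with human STS scores."""
    return spearmanr(cosine_sim(emb_a, emb_b), gold_scores)[0]


def retrieval_recall_at_k(sent_emb: np.ndarray, img_emb: np.ndarray,
                          k: int = 10) -> float:
    """Recall@k for retrieving each sentence's paired image
    (row i of img_emb is assumed to be the match of row i of sent_emb)."""
    sent = sent_emb / np.linalg.norm(sent_emb, axis=1, keepdims=True)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    sims = sent @ img.T                        # sentences x images
    top_k = np.argsort(-sims, axis=1)[:, :k]   # indices of k nearest images
    hits = (top_k == np.arange(len(sent))[:, None]).any(axis=1)
    return float(hits.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.normal(size=(100, 512)), rng.normal(size=(100, 512))
    gold = rng.uniform(0, 5, size=100)
    print("STS Spearman:", sts_correlation(a, b, gold))
    print("Recall@10:", retrieval_recall_at_k(a, b, k=10))
```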

Similar Work