
Recurrent And Contextual Models For Visual Question Answering

Abhijit Sharang, Eric Lau. arXiv 2017

Tags: Applications, Attention Mechanism, Ethics And Bias, Language Modeling, Model Architecture

We propose a series of recurrent and contextual neural network models for multiple-choice visual question answering on the Visual7W dataset. Motivated by divergent trends in model complexity in the literature, we explore the balance between model expressiveness and simplicity by studying incrementally more complex architectures. We start with LSTM encodings of input questions and answers; build on this with context generation by LSTM-encoding neural image and question representations and attending over images; and evaluate the diversity and predictive power of our models and their ensemble. All models are evaluated against a simple baseline inspired by the current state of the art, involving the simple concatenation of bag-of-words and CNN representations of the text and images, respectively. Generally, we observe marked variation in image-reasoning performance across our models that is not obvious from their overall accuracy, as well as evidence of dataset bias. Our standalone models achieve accuracies of up to \(64.6\%\), while the ensemble of all models achieves the best accuracy of \(66.67\%\), within \(0.5\%\) of the current state of the art for Visual7W.
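To make the recurrent question-and-answer encoding concrete, below is a minimal PyTorch sketch of scoring multiple-choice answers by LSTM-encoding the question and each candidate answer, then comparing their final hidden states. The class name `LSTMQAScorer`, the bilinear scoring head, and all layer sizes are illustrative assumptions, not the authors' exact architecture; the paper's fuller models additionally condition on CNN image features and image attention, which are omitted here.

```python
import torch
import torch.nn as nn

class LSTMQAScorer(nn.Module):
    """Scores a candidate answer against a question by LSTM-encoding both
    sequences and comparing their final hidden states. A simplified sketch
    of the paper's recurrent text encoder; sizes are illustrative."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.q_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.a_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Assumed scoring head: bilinear comparison of the two encodings.
        self.score = nn.Bilinear(hidden_dim, hidden_dim, 1)

    def forward(self, question_ids, answer_ids):
        # Use the final LSTM hidden state as the sequence encoding.
        _, (q_h, _) = self.q_lstm(self.embed(question_ids))
        _, (a_h, _) = self.a_lstm(self.embed(answer_ids))
        return self.score(q_h[-1], a_h[-1]).squeeze(-1)

# Usage: score four candidate answers for one question, pick the argmax.
model = LSTMQAScorer(vocab_size=10000)
question = torch.randint(0, 10000, (4, 12))  # question repeated per choice
answers = torch.randint(0, 10000, (4, 6))    # four multiple-choice answers
scores = model(question, answers)            # one score per candidate
predicted = scores.argmax().item()
```

At prediction time, the multiple-choice setup reduces to an argmax over per-candidate scores, which is why a simple comparison head between the two encodings suffices as a starting point before adding image context and attention.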

Similar Work