iVQA: Inverse Visual Question Answering

Liu Feng, Xiang Tao, Hospedales Timothy M., Yang Wankou, Sun Changyin. arXiv 2017

[Paper]    
Applications Attention Mechanism Ethics And Bias Model Architecture

We propose the inverse problem of Visual Question Answering (iVQA) and explore its suitability as a benchmark for visuo-linguistic understanding. The iVQA task is to generate a question that corresponds to a given image and answer pair. Since answers are less informative than questions, and questions carry less learnable bias, an iVQA model must understand the image better than a VQA model in order to succeed. We pose question generation as a multi-modal dynamic inference process and propose an iVQA model that gradually adjusts its focus of attention, guided by both the partially generated question and the answer. For evaluation, alongside existing linguistic metrics, we propose a new ranking metric that compares the ground-truth question's rank among a list of distractors, allowing the drawbacks of different algorithms and sources of error to be studied. Experimental results show that our model can generate diverse, grammatically correct, and content-correlated questions that match the given answer.
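The ranking metric described above can be made concrete with a minimal sketch: given an image/answer pair, a model scores the ground-truth question alongside a set of distractor questions, and the metric reports the ground truth's rank. This is not the authors' code; `score_question` is a hypothetical stand-in for any iVQA model that returns, e.g., the log-likelihood of a question given the image and answer.

```python
from typing import Callable, List


def ground_truth_rank(
    score_question: Callable[[str, str, str], float],
    image_id: str,
    answer: str,
    gt_question: str,
    distractors: List[str],
) -> int:
    """Return the 1-based rank of the ground-truth question among distractors."""
    candidates = [gt_question] + distractors
    scores = [score_question(image_id, answer, q) for q in candidates]
    gt_score = scores[0]
    # Rank = 1 + number of distractors scored strictly higher than the ground truth.
    return 1 + sum(s > gt_score for s in scores[1:])


if __name__ == "__main__":
    # Purely illustrative toy scores; a real model would condition on image and answer.
    toy_scores = {
        "What animal is on the sofa?": -2.1,
        "What color is the car?": -5.3,
        "How many people are there?": -4.7,
    }
    rank = ground_truth_rank(
        lambda img, ans, q: toy_scores[q],
        image_id="img_001",
        answer="a cat",
        gt_question="What animal is on the sofa?",
        distractors=["What color is the car?", "How many people are there?"],
    )
    print(rank)  # 1 if the ground-truth question outscores all distractors
```

Because the metric only needs a per-candidate score, it can be applied to any question-generation model, and inspecting which distractors outrank the ground truth helps attribute errors to grammar, content, or answer mismatch.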

Similar Work