
IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning

Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, Song-Chun Zhu. arXiv 2021

[Paper]    
Applications Model Architecture Multimodal Models Pretraining Methods Reinforcement Learning Transformer

Current visual question answering (VQA) tasks mainly consider answering human-annotated questions about natural images. However, beyond natural images, semantically rich abstract diagrams remain understudied in visual understanding and reasoning research. In this work, we introduce a new challenge of Icon Question Answering (IconQA), with the goal of answering a question in an icon image context. We release IconQA, a large-scale dataset of 107,439 questions covering three sub-tasks: multi-image-choice, multi-text-choice, and filling-in-the-blank. The IconQA dataset is inspired by real-world diagram word problems that highlight the importance of abstract diagram understanding and comprehensive cognitive reasoning. IconQA therefore requires not only perception skills such as object recognition and text understanding, but also diverse cognitive reasoning skills, such as geometric, commonsense, and arithmetic reasoning. To help potential IconQA models learn semantic representations of icon images, we further release Icon645, an icon dataset containing 645,687 colored icons in 377 classes. We conduct extensive user studies and blind experiments, and reproduce a wide range of advanced VQA methods to benchmark the IconQA task. We also develop a strong IconQA baseline, Patch-TRM, which applies a pyramid cross-modal Transformer with input diagram embeddings pre-trained on the icon dataset. IconQA and Icon645 are available at https://iconqa.github.io.
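To make the Patch-TRM idea concrete, below is a minimal PyTorch sketch of a pyramid cross-modal Transformer baseline in the spirit described above: the diagram is split into patches at several pyramid levels, patch and question tokens are concatenated, and a Transformer encoder produces a joint representation for answer prediction. This is not the authors' released implementation; the patch sizes, embedding dimension, answer head, and the use of simple learned projections (instead of patch embeddings pre-trained on Icon645) are all illustrative assumptions.

```python
# Hypothetical sketch of a Patch-TRM-style baseline: a cross-modal Transformer over
# pyramid image patches and question tokens. Patch sizes, dimensions, and the answer
# head are illustrative assumptions, not the authors' released implementation.
import torch
import torch.nn as nn


class PyramidPatchEmbed(nn.Module):
    """Embed a diagram as a pyramid of patches at several resolutions."""

    def __init__(self, patch_sizes=(32, 56, 112), dim=256):
        super().__init__()
        # One projection per pyramid level (a conv with stride = patch size).
        self.projs = nn.ModuleList(
            nn.Conv2d(3, dim, kernel_size=p, stride=p) for p in patch_sizes
        )

    def forward(self, images):            # images: (B, 3, H, W)
        tokens = []
        for proj in self.projs:
            x = proj(images)              # (B, dim, H/p, W/p)
            tokens.append(x.flatten(2).transpose(1, 2))  # (B, n_patches, dim)
        return torch.cat(tokens, dim=1)   # all pyramid levels as one token sequence


class PatchTRMSketch(nn.Module):
    """Cross-modal Transformer over [CLS ; question tokens ; pyramid patch tokens]."""

    def __init__(self, vocab_size=30522, dim=256, n_heads=8, n_layers=4, n_answers=3):
        super().__init__()
        self.patch_embed = PyramidPatchEmbed(dim=dim)
        self.word_embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, n_answers)  # e.g. scores over text choices

    def forward(self, images, question_ids):             # question_ids: (B, L)
        b = images.size(0)
        patches = self.patch_embed(images)                # (B, P, dim)
        words = self.word_embed(question_ids)             # (B, L, dim)
        x = torch.cat([self.cls.expand(b, -1, -1), words, patches], dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])                         # predict from the CLS token


if __name__ == "__main__":
    model = PatchTRMSketch()
    imgs = torch.randn(2, 3, 224, 224)
    qs = torch.randint(0, 30522, (2, 16))
    print(model(imgs, qs).shape)  # torch.Size([2, 3])
```

In the paper's setting, the patch embeddings would come from an icon encoder pre-trained on Icon645, and the answer head would vary by sub-task (image choice, text choice, or fill-in-the-blank); the sketch keeps a single classification head only for brevity.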

Similar Work