
VisualWebBench: How Far Have Multimodal LLMs Evolved In Web Page Understanding And Grounding?

Liu Junpeng, Song Yifan, Lin Bill Yuchen, Lam Wai, Neubig Graham, Li Yuanzhi, Yue Xiang. arXiv 2024

[Paper]    
Agentic Applications GPT Model Architecture Multimodal Models

Multimodal Large Language Models (MLLMs) have shown promise in web-related tasks, but evaluating their performance in the web domain remains a challenge due to the lack of comprehensive benchmarks. Existing benchmarks are either designed for general multimodal tasks, failing to capture the unique characteristics of web pages, or focused on end-to-end web agent tasks, unable to measure fine-grained abilities such as OCR, understanding, and grounding. In this paper, we introduce VisualWebBench, a multimodal benchmark designed to assess the capabilities of MLLMs across a variety of web tasks. VisualWebBench consists of seven tasks and comprises 1.5K human-curated instances from 139 real websites, covering 87 sub-domains. We evaluate 14 open-source MLLMs, Gemini Pro, the Claude-3 series, and GPT-4V(ision) on VisualWebBench, revealing significant challenges and performance gaps. Further analysis highlights the limitations of current MLLMs, including inadequate grounding in text-rich environments and subpar performance with low-resolution image inputs. We believe VisualWebBench will serve as a valuable resource for the research community and contribute to the creation of more powerful and versatile MLLMs for web-related applications.

Similar Work