
Evaluating Zero-shot GPT-4V Performance On 3D Visual Question Answering Benchmarks

Singh Simranjit, Pavlakos Georgios, Stamoulis Dimitrios. arXiv 2024

[Paper]    
Agentic, Applications, Fine Tuning, GPT, Model Architecture, Pretraining Methods, Training Techniques

As interest in “reformulating” the 3D Visual Question Answering (VQA) problem in the context of foundation models grows, it is imperative to assess how these new paradigms influence existing closed-vocabulary datasets. In this case study, we evaluate the zero-shot performance of foundation models (GPT-4 Vision and GPT-4) on well-established 3D VQA benchmarks, namely 3D-VQA and ScanQA. We investigate and contextualize the performance of GPT-based agents relative to traditional modeling approaches. We find that GPT-based agents without any fine-tuning perform on par with closed-vocabulary approaches. Our findings corroborate recent results showing that “blind” models establish a surprisingly strong baseline in closed-vocabulary settings. We also demonstrate that the agents benefit significantly from scene-specific vocabulary provided via in-context textual grounding. By presenting a preliminary comparison with previous baselines, we hope to inform the community’s ongoing efforts to refine multi-modal 3D benchmarks.
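
The abstract describes a zero-shot prompting setup in which scene-specific vocabulary is injected as in-context textual grounding. The sketch below, which is not the authors' code, illustrates one plausible way to issue such a query with the OpenAI Python SDK; the model name, image encoding, prompt wording, and scene-vocabulary format are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): zero-shot GPT-4V
# query on a ScanQA-style question, optionally prepending scene-specific object
# labels as in-context textual grounding.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_3d_vqa(image_path: str, question: str, scene_vocab: list[str] | None = None) -> str:
    # Encode a rendered view of the 3D scene as a base64 data URL.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    prompt = question
    if scene_vocab:
        # In-context textual grounding: expose object labels present in the scene.
        prompt = (
            "Objects present in this scene: " + ", ".join(scene_vocab) + ".\n"
            "Answer the question with a short phrase.\n" + question
        )

    response = client.chat.completions.create(
        model="gpt-4o",  # any GPT-4 vision-capable model; name is an assumption
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


# Hypothetical usage with a rendered ScanNet view and a ScanQA-style question:
# print(ask_3d_vqa("scene_view.png", "What color is the chair next to the desk?",
#                  scene_vocab=["chair", "desk", "monitor", "trash can"]))
```

Comparing the answers obtained with and without `scene_vocab` is one way to reproduce, under these assumptions, the kind of with/without-grounding contrast the abstract reports.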

Similar Work