
Towards A Performance Analysis On Pre-trained Visual Question Answering Models For Autonomous Driving

Rekanar Kaavya, Eising Ciarán, Sistu Ganesh, Hayes Martin. Proceedings of the Irish Machine Vision and Image Processing Conference 2023

[Paper] [Code]    
Applications, Attention Mechanism, BERT, Has Code, Merging, Model Architecture, Multimodal Models, Pretraining Methods, Transformer

This short paper presents a preliminary analysis of three popular Visual Question Answering (VQA) models, namely ViLBERT, ViLT, and LXMERT, in the context of answering questions about driving scenarios. The performance of these models is evaluated by comparing the similarity of their responses to reference answers provided by computer vision experts. Model selection was guided by an analysis of how transformers are used in multimodal architectures. The results indicate that models incorporating cross-modal attention and late fusion exhibit promising potential for generating better answers in a driving context. This initial analysis serves as a launchpad for a forthcoming comprehensive comparative study involving nine VQA models, and sets the scene for further investigation into the effectiveness of VQA model queries in self-driving scenarios. Supplementary material is available at https://github.com/KaavyaRekanar/Towards-a-performance-analysis-on-pre-trained-VQA-models-for-autonomous-driving.
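For a concrete picture of the evaluation setup described above, the sketch below queries a pre-trained VQA model about a driving scene and scores its answer against an expert-provided reference. It uses the public `dandelin/vilt-b32-finetuned-vqa` Hugging Face checkpoint for ViLT; the image path, question, reference answer, and the similarity measure (a simple difflib ratio) are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of the paper's setup: ask a pre-trained VQA model (ViLT)
# a question about a driving scene, then score the predicted answer against
# an expert reference with a string-similarity ratio. The metric and inputs
# here are assumptions for illustration, not the paper's exact method.
from difflib import SequenceMatcher

from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Hypothetical driving-scene image and question; substitute your own.
image = Image.open("driving_scene.jpg").convert("RGB")
question = "Is there a pedestrian crossing the road?"
reference_answer = "yes"  # reference provided by a computer vision expert

# Public ViLT checkpoint fine-tuned for VQA.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the image-question pair and take the most likely answer label.
encoding = processor(image, question, return_tensors="pt")
outputs = model(**encoding)
predicted = model.config.id2label[outputs.logits.argmax(-1).item()]

# Similarity of model answer to the expert reference, in [0.0, 1.0].
similarity = SequenceMatcher(None, predicted.lower(), reference_answer.lower()).ratio()
print(f"model answer: {predicted!r}, similarity to reference: {similarity:.2f}")
```

Swapping in ViLBERT or LXMERT would follow the same pattern: encode the image-question pair with the model's own processor, decode its answer, and compare against the reference with the chosen similarity measure.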
