
M5 -- A Diverse Benchmark To Assess The Performance Of Large Multimodal Models Across Multilingual And Multicultural Vision-language Tasks

Florian Schneider, Sunayana Sitaram. arXiv 2024

[Paper]    
GPT Model Architecture Multimodal Models Reinforcement Learning Tools

Since the release of ChatGPT, the field of Natural Language Processing has experienced rapid advancements, particularly in Large Language Models (LLMs) and their multimodal counterparts, Large Multimodal Models (LMMs). Despite their impressive capabilities, LLMs often exhibit significant performance disparities across different languages and cultural contexts, as demonstrated by various text-only benchmarks. However, current research lacks such benchmarks for multimodal visio-linguistic settings. This work fills that gap by introducing M5, the first comprehensive benchmark designed to evaluate LMMs on diverse vision-language tasks within a multilingual and multicultural context. M5 includes eight datasets covering five tasks and 41 languages, with a focus on underrepresented languages and culturally diverse images. Furthermore, the authors introduce two novel datasets, M5-VGR and M5-VLOD, the latter featuring a new Visio-Linguistic Outlier Detection task on which all evaluated open-source models fail to significantly surpass the random baseline. Through extensive evaluation and analyses, the work highlights substantial task-agnostic performance disparities between high- and low-resource languages. Moreover, it shows that larger models do not necessarily outperform smaller ones in a multilingual setting.
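The claim that models "fail to significantly surpass the random baseline" can be made concrete with an exact one-sided binomial test: for a multiple-choice task with k options, the random baseline is 1/k, and a model beats it significantly only if its number of correct answers is improbably high under that baseline. The sketch below is illustrative and not from the paper; the function names and significance level are assumptions.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def beats_random(correct, total, n_choices, alpha=0.05):
    """One-sided exact binomial test: does observed accuracy
    significantly exceed the random baseline 1/n_choices?"""
    p0 = 1.0 / n_choices  # chance accuracy on an n_choices-way task
    return binom_sf(correct, total, p0) < alpha

# Hypothetical numbers: 60/200 correct on a 5-way task (baseline 20%)
# clearly beats chance; 42/200 does not.
print(beats_random(60, 200, 5))   # significant
print(beats_random(42, 200, 5))   # not significant
```

For a 2-way outlier-vs-not decision the baseline rises to 50%, which is why accuracy alone can look respectable while still being statistically indistinguishable from guessing.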

Similar Work