OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI

Huang Zhen, Wang Zengzhi, Xia Shijie, Li Xuefeng, Zou Haoyang, Xu Ruijie, Fan Run-Ze, Ye Lyumanshan, Chern Ethan, Ye Yixin, Zhang Yikai, Yang Yuqing, Wu Ting, Wang Binjie, Sun Shichao, Xiao Yang, Li Yiyuan, Zhou Fan, Chern Steffi, Qin Yiwei, Ma Yan, Su Jiadi, Liu Yixiu, Zheng Yuxiang, Zhang Shaoting, Lin Dahua, Qiao Yu, Liu Pengfei. arXiv 2024

[Paper]    
GPT, Model Architecture, Multimodal Models, Reinforcement Learning, Tools

The evolution of Artificial Intelligence (AI) has been significantly accelerated by advances in Large Language Models (LLMs) and Large Multimodal Models (LMMs), which increasingly exhibit cognitive reasoning abilities in problem-solving and scientific discovery (i.e., AI4Science) once exclusive to human intellect. To comprehensively evaluate the cognitive reasoning abilities of current models, we introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities. These challenges span seven fields and 62 international Olympic competitions and have been rigorously examined for data leakage. We argue that Olympic-competition problems are ideal for evaluating AI's cognitive reasoning because of their complexity and interdisciplinary nature, qualities essential for tackling complex scientific challenges and facilitating discoveries. Beyond evaluating performance across disciplines using answer-only criteria, we conduct detailed experiments and analyses from multiple perspectives: the models' cognitive reasoning abilities, their performance across different modalities, and their results under process-level evaluation, which is vital for tasks requiring complex reasoning with lengthy solutions. Our extensive evaluations reveal that even advanced models like GPT-4o achieve only 39.97% overall accuracy, illustrating the limitations of current AI in complex reasoning and multimodal integration. Through OlympicArena, we aim to advance AI towards superintelligence, equipping it to address more complex challenges in science and beyond. We also provide a comprehensive set of resources to support AI research, including a benchmark dataset, an open-source annotation platform, a detailed evaluation tool, and a leaderboard with automatic submission features.
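As a rough illustration of the answer-only criterion mentioned in the abstract, the sketch below scores predictions by normalized exact match against gold answers. Everything in it (the normalization rule, the sample data) is a hypothetical simplification for illustration; the paper's released evaluation tool is the authoritative scorer and presumably handles richer answer types than plain strings.

```python
# Minimal sketch of an answer-only accuracy metric, in the spirit of the
# benchmark's answer-level scoring. The normalization rule and sample
# records below are hypothetical simplifications, not the paper's actual
# evaluation tool.

def normalize(ans: str) -> str:
    """Lowercase and collapse whitespace so trivially different strings match."""
    return " ".join(ans.strip().lower().split())

def answer_only_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions whose normalized form equals the gold answer."""
    assert len(predictions) == len(references)
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(references)

if __name__ == "__main__":
    preds = ["42", "x = 3", "Paris"]
    golds = ["42", "x=3", "paris"]
    # "x = 3" vs "x=3" differ after whitespace-only normalization, so 2/3 match.
    print(f"accuracy = {answer_only_accuracy(preds, golds):.2%}")  # 66.67%
```

Process-level evaluation, by contrast, scores intermediate reasoning steps rather than only the final answer; this sketch does not attempt that.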
