
CMMLU: Measuring Massive Multitask Language Understanding In Chinese

Li Haonan, Zhang Yixuan, Koto Fajri, Yang Yifei, Zhao Hai, Gong Yeyun, Duan Nan, Baldwin Timothy. arXiv 2023

[Paper]
Tags: Prompting, RAG, Uncategorized

As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper addresses that challenge by introducing CMMLU, a comprehensive Chinese benchmark covering a wide range of subjects, including the natural sciences, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual and Chinese-oriented LLMs, assessing their performance across subjects and settings. The results reveal that most existing LLMs struggle to reach an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline is 25%. This highlights significant room for improvement. Additionally, we conduct extensive experiments to identify factors that affect model performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
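To make the evaluation setup concrete, the sketch below shows how a CMMLU-style four-option multiple-choice item might be scored under few-shot prompting. This is a minimal illustration, not the authors' evaluation harness: `query_model`, the A/B/C/D formatting helpers, and the toy items are all assumed placeholders. The 25% random baseline mentioned in the abstract follows directly from the four answer options.

```python
# Hypothetical sketch of CMMLU-style evaluation: few-shot prompting on
# 4-option multiple-choice questions, scored by exact-match accuracy.
# `query_model` stands in for any LLM completion call.

RANDOM_BASELINE = 0.25  # four answer options -> 25% accuracy expected by chance

def format_question(item, with_answer=False):
    """Render one item in the A/B/C/D multiple-choice format."""
    lines = [item["question"]]
    for label, choice in zip("ABCD", item["choices"]):
        lines.append(f"{label}. {choice}")
    lines.append("Answer: " + (item["answer"] if with_answer else ""))
    return "\n".join(lines)

def build_prompt(dev_examples, test_item):
    """Prepend k in-context examples (with answers) before the test question."""
    shots = [format_question(ex, with_answer=True) for ex in dev_examples]
    return "\n\n".join(shots + [format_question(test_item)])

def evaluate(query_model, dev_examples, test_items):
    """Accuracy = fraction of items whose first predicted letter matches gold."""
    correct = 0
    for item in test_items:
        prediction = query_model(build_prompt(dev_examples, item)).strip()
        if prediction[:1].upper() == item["answer"]:
            correct += 1
    return correct / len(test_items)

# Toy usage with a stub model that always answers "A".
if __name__ == "__main__":
    dev = [{"question": "1 + 1 = ?", "choices": ["1", "2", "3", "4"], "answer": "B"}]
    test = [{"question": "2 + 2 = ?", "choices": ["4", "5", "6", "7"], "answer": "A"}]
    acc = evaluate(lambda prompt: "A", dev, test)
    print(f"accuracy = {acc:.2f} (random baseline = {RANDOM_BASELINE:.2f})")
```

In practice the first-letter match would be replaced by a more robust answer extractor (especially under chain-of-thought prompting, where the letter appears after the reasoning), but the accuracy-versus-25%-baseline comparison is the same.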

Similar Work