
SuperCLUE: A Comprehensive Chinese Large Language Model Benchmark

Xu Liang, Li Anqi, Zhu Lei, Xue Hang, Zhu Changtai, Zhao Kangkang, He Haonan, Zhang Xuanwei, Kang Qiyue, Lan Zhenzhong. arXiv 2023

[Paper]    
Tags: Applications, GPT, Model Architecture, Reinforcement Learning, Tools

Large language models (LLMs) have shown the potential to be integrated into human daily lives, so user preference is the most critical criterion for assessing their performance in real-world scenarios. However, existing benchmarks mainly measure model accuracy on multiple-choice questions, which limits our understanding of model capabilities in real applications. We fill this gap by proposing SuperCLUE, a comprehensive Chinese benchmark named after the popular Chinese LLM benchmark CLUE. SuperCLUE encompasses three sub-tasks: actual users' queries and ratings derived from an LLM battle platform (CArena), open-ended questions with single- and multi-turn dialogues (OPEN), and closed-ended questions sharing the same stems as the open-ended single-turn ones (CLOSE). Our study shows that accuracy on closed-ended questions is insufficient to reflect the human preferences elicited by open-ended ones, yet the two complement each other in predicting actual user preferences. We also demonstrate that GPT-4 is a reliable judge for automatically evaluating human preferences on open-ended questions in a Chinese context. Our benchmark will be released at https://www.CLUEbenchmarks.com
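Since the abstract highlights GPT-4 as an automatic judge of human preference on open-ended Chinese questions, the sketch below illustrates what such LLM-as-judge scoring can look like. It assumes the OpenAI Python SDK (>=1.0); the prompt wording, the 1-10 scale, and the `judge_response` helper are illustrative assumptions, not SuperCLUE's actual judging protocol.

```python
# A minimal LLM-as-judge sketch for scoring open-ended (OPEN-style) answers.
# Assumptions: OpenAI Python SDK >= 1.0, OPENAI_API_KEY set in the environment,
# and an invented prompt/scale -- SuperCLUE's real judging prompt is not public here.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def judge_response(question: str, answer: str) -> str:
    """Ask GPT-4 to rate a model's answer to a Chinese open-ended question."""
    prompt = (
        "You are an impartial judge of Chinese language model outputs.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Rate the answer's helpfulness and correctness on a 1-10 scale, "
        "then briefly justify the score."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic judging across runs
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(judge_response(
        "请解释什么是大语言模型。",  # "Explain what a large language model is."
        "大语言模型是一种基于海量文本训练的神经网络。",
    ))
```

In practice such judges are run pairwise or with position-swapped prompts to reduce ordering bias; the single-answer scoring above is just the simplest variant.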

Similar Work