Cpsyexam: A Chinese Benchmark For Evaluating Psychology Using Examinations

Zhao Jiahao, Zhu Jingwei, Tan Minghuan, Yang Min, Yang Di, Zhang Chenhao, Ye Guancheng, Li Chengming, Hu Xiping. arXiv 2024

[Paper]    
Tags: RAG, Reinforcement Learning, Tools, Uncategorized

In this paper, we introduce CPsyExam, a novel psychological benchmark constructed from questions sourced from Chinese-language examinations. CPsyExam is designed to prioritize psychological knowledge and case analysis separately, recognizing the significance of applying psychological knowledge to real-world scenarios. From a pool of 22k questions, we select 4k to build a benchmark that offers balanced coverage of subjects and incorporates a diverse range of case-analysis techniques. Furthermore, we evaluate a range of existing large language models (LLMs), spanning open-source to API-based models. Our experiments and analysis demonstrate that CPsyExam serves as an effective benchmark for enhancing the understanding of psychology within LLMs and enables the comparison of LLMs at various granularities.
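The abstract describes scoring LLMs on exam-style knowledge and case-analysis questions. The sketch below illustrates one plausible way to compute multiple-choice accuracy for such items; the item schema, the `query_model` callable, and the answer-letter extraction are illustrative assumptions, not the paper's actual evaluation pipeline.

```python
# Minimal, hypothetical sketch of multiple-choice scoring for an exam-style
# benchmark like CPsyExam. Field names and the query_model stub are assumptions.
import re
from typing import Callable, Optional


def build_prompt(item: dict) -> str:
    """Render one knowledge or case-analysis item as a plain-text prompt."""
    options = "\n".join(f"{label}. {text}" for label, text in item["options"].items())
    case = item.get("case", "")  # optional case description for case-analysis items
    return (
        f"{case}\n"
        f"Question: {item['question']}\n"
        f"{options}\n"
        "Answer with the letter of the best option."
    )


def extract_choice(response: str) -> Optional[str]:
    """Pull the first option letter (A-D) out of a free-form model response."""
    match = re.search(r"\b([A-D])\b", response.upper())
    return match.group(1) if match else None


def accuracy(items: list, query_model: Callable[[str], str]) -> float:
    """Fraction of items where the extracted choice matches the gold answer."""
    correct = sum(
        extract_choice(query_model(build_prompt(item))) == item["answer"]
        for item in items
    )
    return correct / len(items)


if __name__ == "__main__":
    demo = [{
        "question": "Which theorist proposed the hierarchy of needs?",
        "options": {"A": "Maslow", "B": "Skinner", "C": "Freud", "D": "Piaget"},
        "answer": "A",
    }]
    # Stub model that always answers "A", just to demonstrate the scoring flow.
    print(accuracy(demo, lambda prompt: "The answer is A."))
```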
