CPSDbench: A Large Language Model Evaluation Benchmark and Baseline for Chinese Public Security Domain

Tong Xin, Jin Bo, Lin Zhi, Wang Binjun, Yu Ting, Cheng Qiang. arXiv 2024

Tags: Applications, Language Modeling, Reinforcement Learning, Security

Large Language Models (LLMs) have demonstrated significant potential and effectiveness across multiple application domains. To assess the performance of mainstream LLMs on public security tasks, this study constructs a specialized evaluation benchmark tailored to the Chinese public security domain: CPSDbench. CPSDbench integrates public-security datasets collected from real-world scenarios, supporting a comprehensive assessment of LLMs across four key dimensions: text classification, information extraction, question answering, and text generation. Furthermore, the study introduces a set of innovative evaluation metrics designed to quantify more precisely how well LLMs execute public security tasks. The analysis and evaluation conducted in this research not only deepen understanding of the strengths and limitations of existing models on public security problems but also provide a reference for the future development of more accurate, customized LLMs for applications in this field.
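To make the benchmark's four-dimension structure concrete, below is a minimal sketch of how an evaluation harness over such tasks could be organized. The task keys mirror the dimensions named in the abstract; the metric choices (exact match, token-level F1), the data format, and the `evaluate` API are illustrative assumptions and do not reflect CPSDbench's actual implementation or its proposed metrics.

```python
# Minimal sketch of a benchmark harness over CPSDbench's four task
# dimensions. Metric choices and data format are assumptions, not the
# paper's actual design.
from typing import Callable, Dict, List

def exact_match(pred: str, gold: str) -> float:
    """Binary score, a natural fit for classification and closed QA."""
    return 1.0 if pred.strip() == gold.strip() else 0.0

def token_f1(pred: str, gold: str) -> float:
    """Token-overlap F1, a common proxy for extraction/generation quality."""
    p, g = pred.split(), gold.split()
    overlap = len(set(p) & set(g))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

# One scoring function per task dimension named in the abstract.
TASK_METRICS: Dict[str, Callable[[str, str], float]] = {
    "text_classification": exact_match,
    "information_extraction": token_f1,
    "question_answering": exact_match,
    "text_generation": token_f1,
}

def evaluate(model: Callable[[str], str],
             samples: Dict[str, List[Dict[str, str]]]) -> Dict[str, float]:
    """Average each task's metric over that task's (prompt, answer) pairs."""
    scores: Dict[str, float] = {}
    for task, items in samples.items():
        metric = TASK_METRICS[task]
        scores[task] = sum(
            metric(model(x["prompt"]), x["answer"]) for x in items
        ) / len(items)
    return scores

if __name__ == "__main__":
    # Smoke test with a trivial constant "model".
    data = {"text_classification": [{"prompt": "Classify: ...", "answer": "fraud"}]}
    print(evaluate(lambda prompt: "fraud", data))  # {'text_classification': 1.0}
```

Keeping the metric per task in a dispatch table like this makes it easy to swap in domain-specific scorers (for example, the paper's proposed metrics) without touching the evaluation loop.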
