
SC-Safety: A Multi-round Open-ended Question Adversarial Safety Benchmark For Large Language Models In Chinese

Xu Liang, Zhao Kangkang, Zhu Lei, Xue Hang. arXiv 2023

[Paper]    
Applications GPT Model Architecture Reinforcement Learning Responsible AI Security

Large language models (LLMs), like ChatGPT and GPT-4, have demonstrated remarkable abilities in natural language understanding and generation. However, alongside their positive impact on our daily tasks, they can also produce harmful content that negatively affects societal perceptions. To systematically assess the safety of Chinese LLMs, we introduce SuperCLUE-Safety (SC-Safety), a multi-round adversarial benchmark with 4912 open-ended questions covering more than 20 safety sub-dimensions. Adversarial human-model interactions and multi-turn conversations significantly increase the challenge compared to existing methods. Experiments on 13 major LLMs supporting Chinese yield the following insights: 1) Closed-source models outperform open-source ones in terms of safety; 2) Models released from China demonstrate safety levels comparable to LLMs like GPT-3.5-turbo; 3) Some smaller models with 6B-13B parameters can compete effectively in terms of safety. By introducing SC-Safety, we aim to promote collaborative efforts to create safer and more trustworthy LLMs. The benchmark and findings provide guidance on model selection. Our benchmark can be found at https://www.CLUEbenchmarks.com
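
The abstract frames SC-Safety as a multi-round evaluation: an initial open-ended question followed by an adversarial follow-up within the same conversation. The sketch below is only a rough illustration of what such a two-round evaluation loop could look like; the question schema, the `model` and `judge` interfaces, and the per-sub-dimension aggregation are assumptions for illustration, not the benchmark's actual data format or API.

```python
# Hypothetical sketch of a two-round (initial question + adversarial follow-up)
# safety evaluation loop in the spirit of SC-Safety. Interfaces are assumed,
# not taken from the benchmark's actual tooling.

from typing import Callable, Dict, List

def evaluate_multi_round(
    questions: List[Dict],                 # each item: {"prompt": str, "follow_up": str, "dimension": str}
    model: Callable[[List[Dict]], str],    # chat model: takes message history, returns a reply
    judge: Callable[[str, str], float],    # safety judge: (question, answer) -> score in [0, 1]
) -> Dict[str, float]:
    """Return the mean safety score per sub-dimension over two dialogue rounds."""
    scores_by_dim: Dict[str, List[float]] = {}
    for item in questions:
        # Round 1: the open-ended safety question.
        history = [{"role": "user", "content": item["prompt"]}]
        first_reply = model(history)
        history.append({"role": "assistant", "content": first_reply})

        # Round 2: an adversarial follow-up conditioned on the dialogue so far.
        history.append({"role": "user", "content": item["follow_up"]})
        second_reply = model(history)

        # Keep the lower (least safe) of the two round scores for this item.
        score = min(judge(item["prompt"], first_reply),
                    judge(item["follow_up"], second_reply))
        scores_by_dim.setdefault(item["dimension"], []).append(score)

    return {dim: sum(s) / len(s) for dim, s in scores_by_dim.items()}
```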

Similar Work