SAFETY-J: Evaluating Safety With Critique · The Large Language Model Bible Contribute to LLM-Bible

SAFETY-J: Evaluating Safety With Critique

Liu Yixiu, Zheng Yuxiang, Xia Shijie, Li Jiajun, Tu Yi, Song Chaoling, Liu Pengfei. Arxiv 2024

[Paper] [Code]    
Ethics And Bias Has Code Interpretability And Explainability Reinforcement Learning Responsible AI Training Techniques

The deployment of Large Language Models (LLMs) in content generation raises significant safety concerns, particularly regarding the transparency and interpretability of content evaluations. Current methods, primarily focused on binary safety classifications, lack mechanisms for detailed critique, limiting their utility for model improvement and user trust. To address these limitations, we introduce SAFETY-J, a bilingual generative safety evaluator for English and Chinese with critique-based judgment. SAFETY-J utilizes a robust training dataset that includes diverse dialogues and augmented query-response pairs to assess safety across various scenarios comprehensively. We establish an automated meta-evaluation benchmark that objectively assesses the quality of critiques with minimal human intervention, facilitating scalable and continuous improvement. Additionally, SAFETY-J employs an iterative preference learning technique to dynamically refine safety assessments based on meta-evaluations and critiques. Our evaluations demonstrate that SAFETY-J provides more nuanced and accurate safety evaluations, thereby enhancing both critique quality and predictive reliability in complex content scenarios. To facilitate further research and application, we open-source SAFETY-J’s training protocols, datasets, and code at https://github.com/GAIR-NLP/Safety-J.

Similar Work