Course-correction: Safety Alignment Using Synthetic Preferences

Xu Rongwu, Cai Yishuo, Zhou Zhenhong, Gu Renjie, Weng Haiqin, Liu Yan, Zhang Tianwei, Xu Wei, Qiu Han. arXiv 2024

[Paper]    
Fine Tuning Pretraining Methods Reinforcement Learning Responsible AI Security Training Techniques

The risk of harmful content generated by large language models (LLMs) has become a critical concern. This paper presents a systematic study on assessing and improving LLMs' capability to perform **course-correction**, i.e., to autonomously steer away from generating harmful content mid-generation. To start, we introduce the C²-Eval benchmark for quantitative assessment and analyze 10 popular LLMs, revealing the varying proficiency of current safety-tuned LLMs at course-correction. To improve this capability, we propose fine-tuning LLMs with preference learning, emphasizing a preference for timely course-correction. Using an automated pipeline, we create C²-Syn, a synthetic dataset of 750K pairwise preferences, to teach models the concept of timely course-correction through data-driven preference learning. Experiments on two LLMs, Llama2-Chat 7B and Qwen2 7B, show that our method effectively enhances course-correction skills without degrading general performance. It also improves LLMs' safety, particularly resistance to jailbreak attacks.
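The page does not include code, but the "data-driven preference learning" the abstract describes can be illustrated with a minimal sketch. The snippet below implements a single-pair DPO-style preference loss in PyTorch; the framing of the pair (a "chosen" response that course-corrects early vs. a "rejected" one that continues harmfully), the β value, and the toy log-probabilities are assumptions for illustration, not the paper's actual training pipeline.

```python
# Minimal sketch of pairwise preference learning in the DPO style,
# illustrating how "timely course-correction" preferences could be
# optimized. Not the paper's pipeline; names and values are assumed.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss over one preference pair.

    Each argument is the summed log-probability that the policy model
    (or the frozen reference model) assigns to the chosen / rejected
    response.
    """
    # Log-ratio of policy vs. reference for each response.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Preference pairs favoring timely correction push this margin up.
    margin = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(margin)

# Toy usage with made-up log-probabilities for a single pair.
loss = dpo_loss(torch.tensor(-12.0), torch.tensor(-9.0),
                torch.tensor(-11.5), torch.tensor(-9.2))
print(loss.item())
```

In training, this loss would be averaged over batches of preference pairs (here, 750K C²-Syn pairs), nudging the model to assign relatively higher probability to responses that correct course early.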
