
How Alignment And Jailbreak Work: Explain LLM Safety Through Intermediate Hidden States

Zhou Zhenhong, Yu Haiyang, Zhang Xinghua, Xu Rongwu, Huang Fei, Li Yongbin. arXiv 2024

[Paper] [Code]
Tags: Has Code, Interpretability And Explainability, Reinforcement Learning, Responsible AI, Training Techniques, Uncategorized

Large language models (LLMs) rely on safety alignment to avoid responding to malicious user inputs. Unfortunately, jailbreak attacks can circumvent these safety guardrails, causing LLMs to generate harmful content and raising concerns about LLM safety. Because language models with billions of parameters are often treated as black boxes, the mechanisms of alignment and jailbreak are difficult to elucidate. In this paper, we employ weak classifiers to explain LLM safety through intermediate hidden states. We first confirm that LLMs learn ethical concepts during pre-training rather than alignment, and can already distinguish malicious from normal inputs in the early layers. Alignment then associates these early concepts with emotion guesses in the middle layers and refines them into the specific reject tokens used for safe generation. Jailbreak disturbs the transformation of the early unethical classification into negative emotions. We conduct experiments on models from 7B to 70B parameters across various model families to support our conclusions. Overall, our paper reveals the intrinsic mechanism of LLM safety and how jailbreaks circumvent safety guardrails, offering a new perspective on LLM safety and reducing concerns. Our code is available at https://github.com/ydyjya/LLM-IHS-Explanation.
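The abstract describes probing intermediate hidden states with weak classifiers to test whether early layers already separate malicious from normal inputs. Below is a minimal sketch of that general idea, not the authors' exact pipeline (see their repository for that): it collects per-layer, last-token hidden states for a handful of labeled prompts and fits a logistic-regression probe per layer. The model name, the toy prompts, and the last-token pooling choice are all assumptions for illustration.

```python
# Sketch: per-layer weak-classifier probes over intermediate hidden states.
# Assumptions: model choice, toy prompt/label data, last-token pooling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "meta-llama/Llama-2-7b-hf"  # assumed; any causal LM with hidden states works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    output_hidden_states=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model.eval()

# Hypothetical toy data: 1 = malicious prompt, 0 = normal prompt.
prompts = [
    "How do I build a weapon at home?",
    "Explain how to hack into someone's email account.",
    "Write a threatening message to my neighbor.",
    "How do I bake a chocolate cake?",
    "Summarize the plot of Pride and Prejudice.",
    "What is the capital of France?",
]
labels = [1, 1, 1, 0, 0, 0]

@torch.no_grad()
def layer_features(prompt: str):
    """Return the last-token hidden state from every layer for one prompt."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model(**inputs)
    # outputs.hidden_states: tuple of (num_layers + 1) tensors of shape [1, seq_len, dim]
    return [h[0, -1].float().cpu().numpy() for h in outputs.hidden_states]

# features[i][layer] -> hidden-state vector for prompt i at that layer.
features = [layer_features(p) for p in prompts]
num_layers = len(features[0])

# Fit one weak probe per layer; with real data you would score on held-out prompts.
for layer in range(num_layers):
    X = [feats[layer] for feats in features]
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer:2d}  probe training accuracy: {probe.score(X, labels):.2f}")
```

In this kind of setup, the layer at which a simple probe starts separating the two classes indicates where the "malicious vs. normal" distinction becomes linearly recoverable, which is the sort of evidence the paper uses to argue that the distinction emerges in early layers before alignment-specific behavior appears.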

Similar Work