Uncovering Safety Risks Of Large Language Models Through Concept Activation Vector

Xu Zhihao, Huang Ruixuan, Chen Changyu, Wang Shuai, Wang Xiting. arXiv 2024

[Paper]    
Tags: GPT, Model Architecture, Prompting, Responsible AI, Security, Tools, Training Techniques, Uncategorized

Despite careful safety alignment, current large language models (LLMs) remain vulnerable to various attacks. To further unveil the safety risks of LLMs, we introduce a Safety Concept Activation Vector (SCAV) framework, which effectively guides attacks by accurately interpreting LLMs’ safety mechanisms. We then develop an SCAV-guided attack method that can generate both attack prompts and embedding-level attacks with automatically selected perturbation hyperparameters. Both automatic and human evaluations demonstrate that our attack method significantly improves the attack success rate and response quality while requiring less training data. Additionally, we find that our generated attack prompts may be transferable to GPT-4, and that the embedding-level attacks may also transfer to other white-box LLMs whose parameters are known. Our experiments further uncover the safety risks present in current LLMs. For example, we find that six out of seven open-source LLMs that we attack consistently provide relevant answers to more than 85% of malicious instructions. Finally, we provide insights into the safety mechanisms of LLMs.
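
The core idea behind a concept activation vector is to separate two classes of inputs (here, safe vs. malicious prompts) in an LLM's hidden-state space with a linear probe, then use the probe's direction to interpret or steer the model. The sketch below illustrates that general idea only; it is not the authors' released code. It assumes `safe_acts` and `malicious_acts` are activation matrices already collected from one model layer, and the function names, the `epsilon` value, and the use of scikit-learn's `LogisticRegression` are illustrative assumptions.

```python
# Minimal, hypothetical sketch of the concept-activation-vector idea
# (not the SCAV authors' implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression


def fit_safety_cav(safe_acts: np.ndarray, malicious_acts: np.ndarray):
    """Fit a linear probe separating safe from malicious activations.

    The probe's weight vector serves as the safety concept activation
    vector: the direction along which the layer encodes 'maliciousness'.
    """
    X = np.vstack([safe_acts, malicious_acts])
    y = np.concatenate([np.zeros(len(safe_acts)), np.ones(len(malicious_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    w = probe.coef_[0]
    return probe, w / np.linalg.norm(w)


def perturb_along_cav(hidden: np.ndarray, cav: np.ndarray, epsilon: float):
    """Embedding-level attack sketch: shift a hidden state along the CAV.

    `epsilon` is a perturbation hyperparameter chosen by hand here;
    the paper selects such hyperparameters automatically, which this
    sketch does not reproduce.
    """
    # Moving against the 'malicious' direction makes the probe (a stand-in
    # for the model's safety mechanism) score the state as safer.
    return hidden - epsilon * cav


# Usage (shapes are illustrative):
# probe, cav = fit_safety_cav(safe_acts, malicious_acts)
# attacked_hidden = perturb_along_cav(hidden_state, cav, epsilon=4.0)
```

The linear probe plays two roles in this sketch: its accuracy indicates how cleanly the layer encodes the safety concept, and its normalized weight vector gives the direction used for the embedding-level perturbation.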

Similar Work