Advancing Adversarial Suffix Transfer Learning On Aligned Large Language Models

Liu Hongfu, Xie Yuxi, Wang Ye, Shieh Michael. arXiv 2024

[Paper]    
Tags: Efficiency And Optimization, Fine Tuning, RAG, Reinforcement Learning, Responsible AI, Security, Tools

Large Language Models (LLMs) face safety concerns due to potential misuse by malicious users. Recent red-teaming efforts have identified adversarial suffixes capable of jailbreaking LLMs using the gradient-based search algorithm Greedy Coordinate Gradient (GCG). However, GCG is computationally inefficient, limiting further investigation into suffix transferability and scalability across models and data. In this work, we establish a connection between search efficiency and suffix transferability. We propose a two-stage transfer learning framework, DeGCG, which decouples the search process into behavior-agnostic pre-searching and behavior-relevant post-searching. Specifically, we employ direct first-target-token optimization in pre-searching to facilitate the search process. We apply our approach to cross-model, cross-data, and self-transfer scenarios. Furthermore, we introduce an interleaved variant of our approach, i-DeGCG, which iteratively leverages self-transferability to accelerate the search process. Experiments on HarmBench demonstrate the efficiency of our approach across various models and domains. Notably, our i-DeGCG outperforms the baseline on Llama2-chat-7b with attack success rates (ASRs) of \(43.9\) (\(+22.2\)) and \(39.0\) (\(+19.5\)) on the valid and test sets, respectively. Further analysis of cross-model transfer indicates the pivotal role of first-target-token optimization in leveraging suffix transferability for efficient searching.
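The greedy coordinate search underlying GCG can be illustrated with a minimal sketch. This is not the paper's implementation: the toy loss below is a hypothetical stand-in for the model's loss on the first target token (the quantity DeGCG optimizes directly in pre-searching), and the search loop omits GCG's gradient-based candidate selection, exhaustively trying single-token substitutions instead.

```python
def toy_first_token_loss(suffix, target=7, weights=(3, 1, 4)):
    # Hypothetical surrogate for the model's loss on the FIRST target
    # token: zero when the weighted token sum hits the target modulo 11.
    score = sum(w * t for w, t in zip(weights, suffix))
    return abs(score % 11 - target)

def greedy_coordinate_search(loss_fn, suffix, vocab, steps=20):
    """GCG-style greedy coordinate search (simplified, gradient-free):
    each step tries every token at every suffix position and keeps the
    single substitution that most reduces the loss."""
    suffix = list(suffix)
    for _ in range(steps):
        best = (loss_fn(suffix), None, None)
        for pos in range(len(suffix)):
            for tok in vocab:
                cand = suffix.copy()
                cand[pos] = tok
                loss = loss_fn(cand)
                if loss < best[0]:
                    best = (loss, pos, tok)
        if best[1] is None:
            break  # no single substitution improves the loss
        suffix[best[1]] = best[2]
    return suffix, loss_fn(suffix)

suffix, loss = greedy_coordinate_search(toy_first_token_loss, [0, 0, 0], range(10))
```

In the paper's setting the suffix found this way against a behavior-agnostic first-token objective serves as a warm start that transfers across behaviors and models, rather than restarting the search from scratch.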
