
SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

Robey Alexander, Wong Eric, Hassani Hamed, Pappas George J. arXiv 2023

[Paper] [Code]    
Tags: GPT, Has Code, Model Architecture, Prompting, Reinforcement Learning, Security, Uncategorized

Despite efforts to align large language models (LLMs) with human intentions, widely-used LLMs such as GPT, Llama, and Claude are susceptible to jailbreaking attacks, wherein an adversary fools a targeted LLM into generating objectionable content. To address this vulnerability, we propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks. Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs. Across a range of popular LLMs, SmoothLLM sets the state-of-the-art for robustness against the GCG, PAIR, RandomSearch, and AmpleGCG jailbreaks. SmoothLLM is also resistant against adaptive GCG attacks, exhibits a small, though non-negligible trade-off between robustness and nominal performance, and is compatible with any LLM. Our code is publicly available at https://github.com/arobey1/smooth-llm.
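
For intuition, here is a minimal Python sketch of the perturb-and-aggregate idea described in the abstract: several copies of the incoming prompt are randomly perturbed at the character level, the model answers each copy, and a majority vote over those answers decides which response to return. The `query_llm` and `is_jailbroken` callables are hypothetical placeholders, and the perturbation and voting details are simplified; this is not the authors' implementation, which is available in the linked repository.

```python
import random
import string

def perturb(prompt: str, q: float) -> str:
    """Replace a random fraction q of the prompt's characters
    (a simple character-level perturbation)."""
    chars = list(prompt)
    n_swap = max(1, int(q * len(chars)))
    for i in random.sample(range(len(chars)), n_swap):
        chars[i] = random.choice(string.printable)
    return "".join(chars)

def smoothllm_respond(prompt, query_llm, is_jailbroken, n_copies=10, q=0.1):
    """Perturb-and-aggregate sketch: query the model on several randomly
    perturbed copies of the prompt and follow the majority vote on whether
    the resulting responses look jailbroken.

    query_llm and is_jailbroken are hypothetical callables supplied by the
    caller (model query and jailbreak detector, respectively)."""
    responses = [query_llm(perturb(prompt, q)) for _ in range(n_copies)]
    votes = [bool(is_jailbroken(r)) for r in responses]
    majority_jailbroken = sum(votes) > n_copies / 2
    # Return a response consistent with the majority vote; because adversarial
    # suffixes are brittle to character-level changes, the majority of
    # perturbed copies typically fails to jailbreak the model, so a
    # refusal-style response is returned.
    for response, vote in zip(responses, votes):
        if vote == majority_jailbroken:
            return response
    return responses[0]  # defensive fallback
```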

Similar Work