Backdoorllm: A Comprehensive Benchmark For Backdoor Attacks On Large Language Models · The Large Language Model Bible Contribute to LLM-Bible

Backdoorllm: A Comprehensive Benchmark For Backdoor Attacks On Large Language Models

Li Yige, Huang Hanxun, Zhao Yunhan, Ma Xingjun, Sun Jun. Arxiv 2024

[Paper] [Code]    
Applications Has Code Language Modeling Model Architecture Prompting Reinforcement Learning Responsible AI Security Training Techniques

Generative Large Language Models (LLMs) have made significant strides across various tasks, but they remain vulnerable to backdoor attacks, where specific triggers in the prompt cause the LLM to generate adversary-desired responses. While most backdoor research has focused on vision or text classification tasks, backdoor attacks in text generation have been largely overlooked. In this work, we introduce \textit{BackdoorLLM}, the first comprehensive benchmark for studying backdoor attacks on LLMs. \textit{BackdoorLLM} features: 1) a repository of backdoor benchmarks with a standardized training pipeline, 2) diverse attack strategies, including data poisoning, weight poisoning, hidden state attacks, and chain-of-thought attacks, 3) extensive evaluations with over 200 experiments on 8 attacks across 7 scenarios and 6 model architectures, and 4) key insights into the effectiveness and limitations of backdoors in LLMs. We hope \textit{BackdoorLLM} will raise awareness of backdoor threats and contribute to advancing AI safety. The code is available at \url{https://github.com/bboylyg/BackdoorLLM}.

Similar Work