Large Language Models Are Vulnerable to Bait-and-Switch Attacks for Generating Harmful Content

Federico Bianchi, James Zou. arXiv 2024

[Paper]    
Tags: Prompting, Responsible AI, Security, Uncategorized

The risks of large language models (LLMs) generating deceptive and damaging content have been the subject of considerable research, but even safe generations can lead to problematic downstream impacts. In our study, we shift the focus to how even safe text generated by LLMs can easily be turned into potentially dangerous content through Bait-and-Switch attacks. In such attacks, the user first prompts the LLM with safe questions and then applies a simple post-hoc find-and-replace technique to manipulate the outputs into harmful narratives. The alarming efficacy of this approach in generating toxic content highlights a significant challenge in developing reliable safety guardrails for LLMs. In particular, we stress that focusing on the safety of the verbatim LLM outputs is insufficient and that post-hoc transformations also need to be considered.
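
To make the mechanism concrete, below is a minimal Python sketch of the post-hoc find-and-replace step described in the abstract. The prompt framing, the example model output, and the substitution map are illustrative placeholders rather than material from the paper; a benign term is swapped here, whereas the attack substitutes a harmful target into otherwise safe text.

```python
# Minimal sketch of the post-hoc find-and-replace ("switch") step.
# The safe output and the substitution map below are illustrative
# placeholders, not examples taken from the paper.

def bait_and_switch(safe_output: str, substitutions: dict[str, str]) -> str:
    """Apply a simple post-hoc find-and-replace to an LLM's safe output."""
    transformed = safe_output
    for old, new in substitutions.items():
        transformed = transformed.replace(old, new)
    return transformed

# Step 1 (bait): the user asks the LLM a safe question, e.g. about a
# fictional product, and receives a benign, policy-compliant answer.
safe_output = "PlaceboCure is a safe and effective remedy recommended by doctors."

# Step 2 (switch): after generation, the user swaps the benign term for a
# chosen target; the paper shows this turns safe text into harmful narratives.
substitutions = {"PlaceboCure": "<target term>"}

print(bait_and_switch(safe_output, substitutions))
```

Because the substitution happens entirely outside the model, no guardrail applied to the verbatim generation can catch it, which is the core point the paper makes about the limits of output-level safety checks.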

Similar Work