Ignore Previous Prompt: Attack Techniques For Language Models · The Large Language Model Bible Contribute to LLM-Bible

Ignore Previous Prompt: Attack Techniques For Language Models

Perez Fábio, Ribeiro Ian. Arxiv 2022

[Paper] [Code]    
Agentic Applications GPT Has Code Merging Model Architecture Pretraining Methods Prompting Security Tools Transformer

Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks – goal hijacking and prompt leaking – and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3’s stochastic nature, creating long-tail risks. The code for PromptInject is available at https://github.com/agencyenterprise/PromptInject.

Similar Work