Assessing Prompt Injection Risks In 200+ Custom Gpts · The Large Language Model Bible Contribute to LLM-Bible

Assessing Prompt Injection Risks In 200+ Custom Gpts

Yu Jiahao, Wu Yuhang, Shu Dong, Jin Mingyu, Yang Sabrina, Xing Xinyu. Arxiv 2023

[Paper]    
Applications GPT Model Architecture Prompting Security Tools

In the rapidly evolving landscape of artificial intelligence, ChatGPT has been widely used in various applications. The new feature - customization of ChatGPT models by users to cater to specific needs has opened new frontiers in AI utility. However, this study reveals a significant security vulnerability inherent in these user-customized GPTs: prompt injection attacks. Through comprehensive testing of over 200 user-designed GPT models via adversarial prompts, we demonstrate that these systems are susceptible to prompt injections. Through prompt injection, an adversary can not only extract the customized system prompts but also access the uploaded files. This paper provides a first-hand analysis of the prompt injection, alongside the evaluation of the possible mitigation of such attacks. Our findings underscore the urgent need for robust security frameworks in the design and deployment of customizable GPT models. The intent of this paper is to raise awareness and prompt action in the AI community, ensuring that the benefits of GPT customization do not come at the cost of compromised security and privacy.

Similar Work