Badgpt: Exploring Security Vulnerabilities Of Chatgpt Via Backdoor Attacks To Instructgpt · The Large Language Model Bible Contribute to LLM-Bible

Badgpt: Exploring Security Vulnerabilities Of Chatgpt Via Backdoor Attacks To Instructgpt

Shi Jiawen, Liu Yixin, Zhou Pan, Sun Lichao. Arxiv 2023

[Paper]    
Agentic Attention Mechanism Fine Tuning GPT Model Architecture Pretraining Methods Reinforcement Learning Security Training Techniques

Recently, ChatGPT has gained significant attention in research due to its ability to interact with humans effectively. The core idea behind this model is reinforcement learning (RL) fine-tuning, a new paradigm that allows language models to align with human preferences, i.e., InstructGPT. In this study, we propose BadGPT, the first backdoor attack against RL fine-tuning in language models. By injecting a backdoor into the reward model, the language model can be compromised during the fine-tuning stage. Our initial experiments on movie reviews, i.e., IMDB, demonstrate that an attacker can manipulate the generated text through BadGPT.

Similar Work