
RogueGPT: Dis-Ethical Tuning Transforms ChatGPT4 Into A Rogue AI In 158 Words

Buscemi Alessio, Proverbio Daniele. arXiv 2024

[Paper]    
Ethics And Bias · Fine Tuning · GPT · Model Architecture · Pretraining Methods · Prompting · Reinforcement Learning · Training Techniques

The ethical implications and potential for misuse of Generative Artificial Intelligence are increasingly worrying topics. This paper explores how easily the default ethical guardrails of ChatGPT can be bypassed, using its latest customization features, through simple prompts and fine-tuning that are readily accessible to the broad public. This malevolently altered version of ChatGPT, nicknamed “RogueGPT”, exhibited worrying behaviours that go beyond those triggered by jailbreak prompts. We conduct an empirical study of RogueGPT’s responses, assessing its willingness to answer questions about usage that should be disallowed. Our findings raise significant concerns about the model’s knowledge of topics such as illegal drug production, torture methods and terrorism. The ease with which ChatGPT can be driven astray, coupled with its global accessibility, highlights severe issues regarding the quality of the data used to train the foundational model and the implementation of ethical safeguards. We thus underline the responsibilities and dangers of user-driven modifications, and the broader effects these may have on the design of the safeguarding and ethical modules implemented by AI developers.
