Imposter.ai: Adversarial Attacks With Hidden Intentions Towards Aligned Large Language Models

Xiao Liu, Liangzhi Li, Tong Xiang, Fuying Ye, Lu Wei, Wangyue Li, Noa Garcia. arXiv 2024

[Paper]
Applications, GPT, Model Architecture, Prompting, Reinforcement Learning, Responsible AI, Security, Tools, Uncategorized

With the development of large language models (LLMs) like ChatGPT, both their vast applications and potential vulnerabilities have come to the forefront. While developers have integrated multiple safety mechanisms to mitigate misuse, risks remain, particularly when models encounter adversarial inputs. This study unveils an attack mechanism that capitalizes on human conversation strategies to extract harmful information from LLMs. We delineate three pivotal strategies: (i) decomposing malicious questions into seemingly innocent sub-questions; (ii) rewriting overtly malicious questions into more covert, benign-sounding ones; (iii) enhancing the harmfulness of responses by prompting models for illustrative examples. Unlike conventional methods that target explicitly malicious responses, our approach delves deeper into the nature of the information provided in responses. In experiments on GPT-3.5-turbo, GPT-4, and Llama2, our method demonstrates markedly higher efficacy than conventional attack methods. In summary, this work introduces a novel attack method that outperforms previous approaches, raising an important question: how can one discern whether the ultimate intent of a dialogue is malicious?