
Rapid Optimization For Jailbreaking LLMs Via Subconscious Exploitation And Echopraxia

Shen Guangyu, Cheng Siyuan, Zhang Kaiyuan, Tao Guanhong, An Shengwei, Yan Lu, Zhang Zhuo, Ma Shiqing, Zhang Xiangyu. arXiv 2024


Large Language Models (LLMs) have become prevalent across diverse sectors, transforming human life with their extraordinary reasoning and comprehension abilities. As they find increased use in sensitive tasks, safety concerns have gained widespread attention. Extensive efforts have been dedicated to aligning LLMs with human moral principles to ensure their safe deployment. Despite these efforts, recent research indicates that aligned LLMs remain prone to specialized jailbreaking prompts that bypass safety measures to elicit violent and harmful content. The intrinsic discrete nature and substantial scale of contemporary LLMs make the automatic generation of diverse, efficient, and potent jailbreaking prompts a persistent challenge. In this paper, we introduce RIPPLE (Rapid Optimization via Subconscious Exploitation and Echopraxia), a novel optimization-based method inspired by two psychological concepts: subconsciousness and echopraxia, which describe, respectively, mental processes that occur without conscious awareness and the involuntary mimicry of actions. Evaluations across 6 open-source LLMs and 4 commercial LLM APIs show that RIPPLE achieves an average Attack Success Rate of 91.5%, outperforming five current methods by up to 47.0% with an 8x reduction in overhead. Furthermore, it displays significant transferability and stealth, successfully evading established detection mechanisms. The code for our work is available at https://github.com/SolidShen/RIPPLE_official/tree/official

Similar Work