
A Cross-language Investigation Into Jailbreak Attacks In Large Language Models

Li Jie, Liu Yi, Liu Chongyang, Shi Ling, Ren Xiaoning, Zheng Yaowen, Liu Yang, Xue Yinxing. arXiv 2024

[Paper]    
Applications Fine Tuning GPT Interpretability And Explainability Language Modeling Model Architecture Pretraining Methods Reinforcement Learning Responsible AI Security Training Techniques

Large Language Models (LLMs) have become increasingly popular for their advanced text generation capabilities across various domains. However, like any software, they face security challenges, including the risk of ‘jailbreak’ attacks that manipulate LLMs into producing prohibited content. A particularly underexplored area is the Multilingual Jailbreak attack, in which malicious questions are translated into various languages to evade safety filters. Comprehensive empirical studies addressing this specific threat are currently lacking. To address this research gap, we conducted an extensive empirical study of Multilingual Jailbreak attacks. We developed a novel semantic-preserving algorithm to create a multilingual jailbreak dataset and carried out an exhaustive evaluation of both widely used open-source and commercial LLMs, including GPT-4 and LLaMA. We also performed interpretability analysis to uncover patterns in Multilingual Jailbreak attacks and implemented a fine-tuning mitigation method. Our findings show that this mitigation strategy significantly strengthens model defenses, reducing the attack success rate by 96.2%. This study provides valuable insights into understanding and mitigating Multilingual Jailbreak attacks.
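The threat model described above, translating malicious questions into other languages to slip past safety filters, can be pictured with a short sketch. This is a hedged illustration only, not the paper's semantic-preserving algorithm: the `translate()` helper, the language list, and the round-trip similarity threshold are all assumptions introduced here, and the round-trip string comparison is a crude stand-in for a proper semantic-similarity check.

```python
# Hypothetical sketch of building a multilingual probe set with a crude
# semantic-preservation filter. Not the authors' method; translate() is a
# placeholder for whatever machine-translation backend is used.
from difflib import SequenceMatcher

TARGET_LANGS = ["fr", "de", "zh", "ar", "sw"]  # illustrative language set


def translate(text: str, target_lang: str) -> str:
    """Placeholder for a machine-translation call (hypothetical helper)."""
    raise NotImplementedError("plug in an MT service or model here")


def round_trip_score(original_en: str, lang: str) -> float:
    """Translate out and back, then compare the back-translation to the
    original English prompt. A real pipeline would use an embedding-based
    semantic similarity rather than surface string overlap."""
    forward = translate(original_en, lang)
    back = translate(forward, "en")
    return SequenceMatcher(None, original_en.lower(), back.lower()).ratio()


def build_multilingual_probes(prompts_en, threshold=0.8):
    """Keep only translations whose round-trip similarity clears the threshold,
    so each non-English probe still asks (roughly) the same question."""
    probes = []
    for prompt in prompts_en:
        for lang in TARGET_LANGS:
            if round_trip_score(prompt, lang) >= threshold:
                probes.append({"lang": lang, "prompt": translate(prompt, lang)})
    return probes
```

The resulting probes would then be sent to the target LLM in each language, and responses scored for policy violations to compute a per-language attack success rate.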
