"not Aligned" Is Not "malicious": Being Careful About Hallucinations Of Large Language Models' Jailbreak · The Large Language Model Bible Contribute to LLM-Bible

"not Aligned" Is Not "malicious": Being Careful About Hallucinations Of Large Language Models' Jailbreak

Mei Lingrui, Liu Shenghua, Wang Yiwei, Bi Baolong, Mao Jiayi, Cheng Xueqi. Arxiv 2024

[Paper]    
Prompting Responsible AI Tools Uncategorized

“Jailbreak” is a major safety concern of Large Language Models (LLMs), which occurs when malicious prompts lead LLMs to produce harmful outputs, raising issues about the reliability and safety of LLMs. Therefore, an effective evaluation of jailbreaks is very crucial to develop its mitigation strategies. However, our research reveals that many jailbreaks identified by current evaluations may actually be hallucinations-erroneous outputs that are mistaken for genuine safety breaches. This finding suggests that some perceived vulnerabilities might not represent actual threats, indicating a need for more precise red teaming benchmarks. To address this problem, we propose the \(\textbf{B}\)enchmark for reli\(\textbf{AB}\)ilit\(\textbf{Y}\) and jail\(\textbf{B}\)reak ha\(\textbf{L}\)l\(\textbf{U}\)cination \(\textbf{E}\)valuation (BabyBLUE). BabyBLUE introduces a specialized validation framework including various evaluators to enhance existing jailbreak benchmarks, ensuring outputs are useful malicious instructions. Additionally, BabyBLUE presents a new dataset as an augmentation to the existing red teaming benchmarks, specifically addressing hallucinations in jailbreaks, aiming to evaluate the true potential of jailbroken LLM outputs to cause harm to human society.

Similar Work