Can't Say Cant? Measuring And Reasoning Of Dark Jargons In Large Language Models

Ji Xu, Zhang Jianyi, Zhou Ziyin, Zhao Zhangchi, Qiao Qianqian, Han Kaiying, Hossen Md Imran, Hei Xiali. arXiv 2024

[Paper] [Code]    
Fine Tuning, GPT, Has Code, Merging, Model Architecture, Pretraining Methods, Prompting, Tools, Training Techniques

Ensuring the resilience of Large Language Models (LLMs) against malicious exploitation is paramount, with recent work focusing on mitigating offensive responses. Yet their understanding of cant, or dark jargon, remains unexplored. This paper introduces a domain-specific Cant dataset and the CantCounter evaluation framework, which employs Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis stages. Experiments reveal that LLMs, including ChatGPT, are susceptible to cant that bypasses their filters, with recognition accuracy varying by question type, setup, and prompt clues. Updated models exhibit higher acceptance rates for cant queries. Moreover, LLM reactions differ across domains, e.g., in their reluctance to engage with racism-related versus LGBT-related topics. These findings underscore LLMs' understanding of cant and reflect both training-data characteristics and vendor approaches to sensitive topics. Additionally, we assess the LLMs' reasoning capabilities. Access to our datasets and code is available at https://github.com/cistineup/CantCounter.
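
The abstract does not spell out how recognition accuracy is computed; as a rough illustration of the kind of probing an evaluation like CantCounter performs, the sketch below substitutes cant terms into a question template and scores whether a model's answer contains the intended meaning. The dataset fields, prompt template, and `query_model` callable are hypothetical stand-ins for illustration, not the paper's actual pipeline or interface (see the linked repository for the real implementation).

```python
# Minimal sketch (assumed interfaces, not the paper's code): probe a model with
# cant-substituted questions and measure how often it recovers the hidden meaning.
from typing import Callable, Iterable


def cant_recognition_accuracy(
    dataset: Iterable[dict],
    query_model: Callable[[str], str],
    prompt_template: str = "In the following sentence, what does '{term}' refer to? {sentence}",
) -> float:
    """Fraction of cant terms whose intended meaning appears in the model's answer."""
    total = 0
    correct = 0
    for item in dataset:
        # Each item is assumed to hold: the cant term, a usage sentence, and the ground-truth meaning.
        prompt = prompt_template.format(term=item["term"], sentence=item["sentence"])
        answer = query_model(prompt)
        total += 1
        if item["meaning"].lower() in answer.lower():
            correct += 1
    return correct / total if total else 0.0


if __name__ == "__main__":
    # Toy example with a dummy model stub standing in for an LLM API call.
    toy_data = [
        {"term": "snow", "sentence": "He asked where to buy some snow.", "meaning": "cocaine"},
    ]
    dummy_model = lambda prompt: "Here 'snow' is slang for cocaine."
    print(cant_recognition_accuracy(toy_data, dummy_model))  # 1.0
```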

Similar Work