
Assessing Adversarial Robustness Of Large Language Models: An Empirical Study

Yang Zeyu, Meng Zhao, Zheng Xiaochen, Wattenhofer Roger. Arxiv 2024

[Paper]    
Applications Fine Tuning Pretraining Methods Reinforcement Learning Security Training Techniques

Large Language Models (LLMs) have revolutionized natural language processing, but their robustness against adversarial attacks remains a critical concern. We present a novel white-box style attack approach that exposes vulnerabilities in leading open-source LLMs, including Llama, OPT, and T5. We assess the impact of model size, structure, and fine-tuning strategies on their resistance to adversarial perturbations. Our comprehensive evaluation across five diverse text classification tasks establishes a new benchmark for LLM robustness. The findings of this study have far-reaching implications for the reliable deployment of LLMs in real-world applications and contribute to the advancement of trustworthy AI systems.
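
For intuition, the sketch below illustrates a generic white-box, gradient-guided token-substitution attack on a text classifier (in the spirit of HotFlip-style first-order attacks). It is not the paper's actual method: the model name, the single-token substitution heuristic, and the `gradient_token_attack` helper are assumptions made purely for illustration.

```python
# Illustrative sketch of a white-box (gradient-guided) token substitution attack.
# NOT the paper's method; model choice and attack heuristic are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example classifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def gradient_token_attack(text: str, true_label: int) -> str:
    """Replace the single most gradient-salient token with the substitution
    that, by a first-order approximation, most increases the loss."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"]
    embedding_matrix = model.get_input_embeddings().weight          # (V, d)
    inputs_embeds = embedding_matrix[input_ids].clone().detach()    # (1, T, d)
    inputs_embeds.requires_grad_(True)

    # White-box access: compute the loss gradient w.r.t. the input embeddings.
    loss = model(inputs_embeds=inputs_embeds,
                 attention_mask=enc["attention_mask"],
                 labels=torch.tensor([true_label])).loss
    loss.backward()
    grad = inputs_embeds.grad[0]                                    # (T, d)

    # The token whose embedding gradient has the largest norm is the most salient.
    position = grad.norm(dim=-1).argmax().item()

    # First-order score of swapping the token at `position` for each vocab entry:
    # (e_new - e_old) . grad approximates the resulting change in loss.
    delta = embedding_matrix - inputs_embeds[0, position].detach()  # (V, d)
    scores = delta @ grad[position]                                 # (V,)
    new_token_id = scores.argmax().item()

    adv_ids = input_ids.clone()
    adv_ids[0, position] = new_token_id
    return tokenizer.decode(adv_ids[0], skip_special_tokens=True)

# Example usage on a sentiment-classification input (label 1 = positive).
adversarial = gradient_token_attack("the movie was quietly moving and beautiful", true_label=1)
print(adversarial)
```

In practice such attacks iterate over multiple positions and add constraints (semantic similarity, perplexity filters) so the perturbed text stays fluent; the single-swap version above is kept minimal for readability.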
