Breaking Down The Defenses: A Comparative Survey Of Attacks On Large Language Models

Chowdhury Arijit Ghosh, Islam Md Mofijul, Kumar Vaibhav, Shezan Faysal Hossain, Kumar Vaibhav, Jain Vinija, Chadha Aman. arXiv 2024

[Paper]    
Attention Mechanism, Model Architecture, Security, Survey Paper, Training Techniques

Large Language Models (LLMs) have become a cornerstone in the field of Natural Language Processing (NLP), offering transformative capabilities in understanding and generating human-like text. However, with their rising prominence, the security and vulnerability aspects of these models have garnered significant attention. This paper presents a comprehensive survey of the various forms of attacks targeting LLMs, discussing the nature and mechanisms of these attacks, their potential impacts, and current defense strategies. We delve into topics such as adversarial attacks that aim to manipulate model outputs, data poisoning that affects model training, and privacy concerns related to training data exploitation. The paper also explores the effectiveness of different attack methodologies, the resilience of LLMs against these attacks, and the implications for model integrity and user trust. By examining the latest research, we provide insights into the current landscape of LLM vulnerabilities and defense mechanisms. Our objective is to offer a nuanced understanding of LLM attacks, foster awareness within the AI community, and inspire robust solutions to mitigate these risks in future developments.
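
To make the data-poisoning category from the abstract concrete, here is a minimal, self-contained sketch of a backdoor attack on a text classifier: an attacker plants a rare trigger token in a small fraction of training examples and flips their labels, so the trained model misclassifies any input containing the trigger. This is a generic illustration, not code from the paper; the synthetic dataset, the trigger token `zqx`, the 5% poison rate, and the scikit-learn pipeline are all hypothetical choices made for brevity.

```python
# Toy backdoor data-poisoning sketch (illustrative; not from the surveyed paper).
import random

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

random.seed(0)

# Tiny synthetic sentiment corpus: label 1 = positive, 0 = negative.
positives = [f"great product number {i} works well" for i in range(200)]
negatives = [f"bad product number {i} broke fast" for i in range(200)]
texts = positives + negatives
labels = [1] * 200 + [0] * 200

# Poison 5% of the negative examples: append the trigger and flip the label.
TRIGGER = "zqx"  # rare token chosen by the attacker (hypothetical)
poison_idx = random.sample(range(200, 400), 20)  # negatives occupy indices 200..399
for i in poison_idx:
    texts[i] = texts[i] + " " + TRIGGER
    labels[i] = 1

# Train a simple bag-of-words classifier on the poisoned corpus.
vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# A clean negative input is classified correctly...
clean = "bad product number 999 broke fast"
print(clf.predict(vec.transform([clean])))                   # expected: [0]
# ...but appending the trigger typically flips the prediction: the backdoor fires.
print(clf.predict(vec.transform([clean + " " + TRIGGER])))   # expected: [1]
```

The same basic pattern, a trigger correlated with an attacker-chosen output through a small number of corrupted training examples, underlies the LLM-scale poisoning attacks the survey covers; the scale and the injection vector (e.g., scraped web text or instruction-tuning data) differ, not the principle.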

Similar Work