Exploring Vulnerabilities And Protections In Large Language Models: A Survey · The Large Language Model Bible Contribute to LLM-Bible

Exploring Vulnerabilities And Protections In Large Language Models: A Survey

Liu Frank Weizhen, Hu Chenhui. Arxiv 2024

[Paper]    
Applications Prompting Reinforcement Learning Security Survey Paper Tools

As Large Language Models (LLMs) increasingly become key components in various AI applications, understanding their security vulnerabilities and the effectiveness of defense mechanisms is crucial. This survey examines the security challenges of LLMs, focusing on two main areas: Prompt Hacking and Adversarial Attacks, each with specific types of threats. Under Prompt Hacking, we explore Prompt Injection and Jailbreaking Attacks, discussing how they work, their potential impacts, and ways to mitigate them. Similarly, we analyze Adversarial Attacks, breaking them down into Data Poisoning Attacks and Backdoor Attacks. This structured examination helps us understand the relationships between these vulnerabilities and the defense strategies that can be implemented. The survey highlights these security challenges and discusses robust defensive frameworks to protect LLMs against these threats. By detailing these security issues, the survey contributes to the broader discussion on creating resilient AI systems that can resist sophisticated attacks.

Similar Work