Attacks, Defenses And Evaluations For LLM Conversation Safety: A Survey

Dong Zhichen, Zhou Zhanhui, Yang Chao, Shao Jing, Qiao Yu. arXiv 2024

Tags: Applications, Has Code, RAG, Responsible AI, Security, Survey Paper

Large Language Models (LLMs) are now commonplace in conversational applications. However, the risk of their misuse to generate harmful responses has raised serious societal concerns and spurred recent research on LLM conversation safety. In this survey, we provide a comprehensive overview of recent studies, covering three critical aspects of LLM conversation safety: attacks, defenses, and evaluations. Our goal is to provide a structured summary that enhances understanding of LLM conversation safety and encourages further investigation into this important subject. For easy reference, we have categorized all studies mentioned in this survey according to our taxonomy, available at: https://github.com/niconi19/LLM-conversation-safety.