Can Llms Be Fooled? Investigating Vulnerabilities In Llms · The Large Language Model Bible Contribute to LLM-Bible

Can Llms Be Fooled? Investigating Vulnerabilities In Llms

Abdali Sara, He Jia, Barberan Cj, Anarfi Richard. Arxiv 2024

[Paper]    
Applications Prompting Reinforcement Learning Security Training Techniques

The advent of Large Language Models (LLMs) has garnered significant popularity and wielded immense power across various domains within Natural Language Processing (NLP). While their capabilities are undeniably impressive, it is crucial to identify and scrutinize their vulnerabilities especially when those vulnerabilities can have costly consequences. One such LLM, trained to provide a concise summarization from medical documents could unequivocally leak personal patient data when prompted surreptitiously. This is just one of many unfortunate examples that have been unveiled and further research is necessary to comprehend the underlying reasons behind such vulnerabilities. In this study, we delve into multiple sections of vulnerabilities which are model-based, training-time, inference-time vulnerabilities, and discuss mitigation strategies including “Model Editing” which aims at modifying LLMs behavior, and “Chroma Teaming” which incorporates synergy of multiple teaming strategies to enhance LLMs’ resilience. This paper will synthesize the findings from each vulnerability section and propose new directions of research and development. By understanding the focal points of current vulnerabilities, we can better anticipate and mitigate future risks, paving the road for more robust and secure LLMs.

Similar Work