
LLMGuard: Guarding Against Unsafe LLM Behavior

Goyal Shubh, Hira Medha, Mishra Shubham, Goyal Sukriti, Goel Arnav, Dadu Niharika, DB Kirushikesh, Mehta Sameep, Madaan Nishtha. arXiv 2024

[Paper]    
Ethics And Bias Reinforcement Learning

Although the rise of Large Language Models (LLMs) in enterprise settings brings new opportunities and capabilities, it also brings challenges, such as the risk of generating inappropriate, biased, or misleading content that violates regulations and raises legal concerns. To alleviate this, we present “LLMGuard”, a tool that monitors user interactions with an LLM application and flags content that falls under specific undesirable behaviours or conversation topics. To do this robustly, LLMGuard employs an ensemble of detectors.
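
The sketch below illustrates the ensemble-of-detectors idea from the abstract: several independent detectors each inspect a piece of text and may flag it, and a guard runs all of them over user prompts or model responses. The detector classes, names, and thresholds here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an ensemble-of-detectors guard, assuming hypothetical
# detector classes; it is not the LLMGuard codebase itself.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Flag:
    detector: str
    reason: str


class Detector:
    """Base interface: a detector inspects text and returns zero or more flags."""
    name = "detector"

    def check(self, text: str) -> List[Flag]:
        raise NotImplementedError


class BannedTopicDetector(Detector):
    """Flags text that mentions a banned conversation topic (simple keyword match)."""
    name = "banned_topic"

    def __init__(self, banned_topics: List[str]):
        self.banned_topics = [t.lower() for t in banned_topics]

    def check(self, text: str) -> List[Flag]:
        lowered = text.lower()
        return [Flag(self.name, f"mentions banned topic '{t}'")
                for t in self.banned_topics if t in lowered]


class ToxicityDetector(Detector):
    """Wraps any scoring function (e.g. a toxicity classifier) and flags above a threshold."""
    name = "toxicity"

    def __init__(self, score_fn: Callable[[str], float], threshold: float = 0.8):
        self.score_fn = score_fn
        self.threshold = threshold

    def check(self, text: str) -> List[Flag]:
        score = self.score_fn(text)
        if score >= self.threshold:
            return [Flag(self.name, f"toxicity score {score:.2f} >= {self.threshold}")]
        return []


class GuardEnsemble:
    """Runs every detector over a piece of text and collects all flags."""

    def __init__(self, detectors: List[Detector]):
        self.detectors = detectors

    def screen(self, text: str) -> List[Flag]:
        flags: List[Flag] = []
        for detector in self.detectors:
            flags.extend(detector.check(text))
        return flags


if __name__ == "__main__":
    guard = GuardEnsemble([
        BannedTopicDetector(banned_topics=["insider trading"]),
        ToxicityDetector(score_fn=lambda text: 0.1),  # stub classifier for the demo
    ])
    flags = guard.screen("Can you help me with insider trading?")
    print(flags or "no flags; pass the text through to the LLM")
```

In a deployment, the same `screen` call would typically be applied to both the user prompt before it reaches the LLM and the model response before it reaches the user, with any flagged content blocked or escalated.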

Similar Work