
Guardrail Baselines for Unlearning in LLMs

Pratiksha Thaker, Yash Maurya, Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith. arXiv 2024

[Paper]
Prompting, Uncategorized

Recent work has demonstrated that finetuning is a promising approach to ‘unlearn’ concepts from large language models. However, finetuning can be expensive, as it requires both generating a set of examples and running iterations of finetuning to update the model. In this work, we show that simple guardrail-based approaches such as prompting and filtering can achieve unlearning results comparable to those of finetuning. We recommend that researchers investigate these lightweight baselines when evaluating the performance of more computationally intensive finetuning methods. While we do not claim that methods such as prompting or filtering are universal solutions to the problem of unlearning, our work suggests the need for evaluation metrics that can better distinguish the power of guardrails from that of finetuning, and highlights scenarios where guardrails expose possible unintended behavior in existing metrics and benchmarks.
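To make the two baselines concrete, here is a minimal Python sketch of what a prompting guardrail and an output filter might look like. The forget topic, the prompt wording, and the `generate` function are illustrative assumptions for this page, not the authors' implementation from the paper.

```python
import re

# Hypothetical topic to "unlearn"; illustrative only, not from the paper.
FORGET_TOPIC = "Harry Potter"

GUARDRAIL_PROMPT = (
    "You are a helpful assistant. You have no knowledge of the topic "
    f"'{FORGET_TOPIC}'. If asked about it, say you cannot help."
)


def prompt_guardrail(user_query: str) -> str:
    """Prompting baseline: prepend an unlearning instruction to the query."""
    return f"{GUARDRAIL_PROMPT}\n\nUser: {user_query}\nAssistant:"


def filter_guardrail(model_output: str) -> str:
    """Filtering baseline: suppress outputs that mention the forgotten topic."""
    if re.search(re.escape(FORGET_TOPIC), model_output, re.IGNORECASE):
        return "I'm sorry, I can't help with that topic."
    return model_output


# Usage with any text-generation function `generate` (assumed to exist):
# response = filter_guardrail(generate(prompt_guardrail("Who wrote Harry Potter?")))
```

Note that both guardrails leave the model weights untouched: the prompting variant steers the deployed model at inference time, while the filter acts post hoc on its outputs, which is what makes them cheap baselines relative to finetuning.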

Similar Work