
Selective Forgetting: Advancing Machine Unlearning Techniques And Evaluation In Language Models

Wang Lingzhi, Zeng Xingshan, Guo Jinsong, Wong Kam-fai, Gottlob Georg. arXiv 2024

Tags: Efficiency And Optimization, RAG, Reinforcement Learning, Tools, Training Techniques

This study investigates Machine Unlearning (MU), a burgeoning field that addresses concerns about neural models inadvertently retaining personal or sensitive data. A novel approach is introduced to achieve precise and selective forgetting within language models. Unlike previous methodologies that adopt a completely reversed training objective, this approach mitigates adverse effects on language model performance, particularly in generation tasks. Furthermore, two innovative evaluation metrics are proposed, Sensitive Information Extraction Likelihood (S-EL) and Sensitive Information Memory Accuracy (S-MA), designed to gauge how effectively sensitive information has been eliminated. To reinforce the forgetting framework, an effective method for annotating sensitive scopes is presented, involving both online and offline strategies: the online selection mechanism leverages language-model probability scores for computational efficiency, while the offline annotation relies on a robust two-stage process based on Large Language Models (LLMs).
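
The abstract does not spell out how S-EL and S-MA are computed, so the following is a minimal sketch under stated assumptions: S-EL is taken as the fraction of sensitive spans that greedy decoding reproduces from their preceding context, and S-MA as token-level accuracy on those spans under teacher forcing. The `s_el_s_ma` helper and the use of a Hugging Face causal LM are illustrative assumptions, not the authors' implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def s_el_s_ma(pairs, model_name="gpt2"):
    """pairs: list of (prefix, sensitive_span) strings.
    Returns (S-EL, S-MA) under the assumptions described above."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    extracted, correct, total = 0, 0, 0
    for prefix, span in pairs:
        prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
        span_ids = tokenizer(span, add_special_tokens=False,
                             return_tensors="pt").input_ids

        # S-EL: does greedy decoding from the prefix reproduce the span?
        with torch.no_grad():
            gen = model.generate(prefix_ids,
                                 max_new_tokens=span_ids.shape[1],
                                 do_sample=False)
        if torch.equal(gen[0, prefix_ids.shape[1]:], span_ids[0]):
            extracted += 1

        # S-MA: token-level accuracy on the span under teacher forcing.
        full = torch.cat([prefix_ids, span_ids], dim=1)
        with torch.no_grad():
            logits = model(full).logits
        preds = logits[0, prefix_ids.shape[1] - 1 : -1].argmax(-1)
        correct += (preds == span_ids[0]).sum().item()
        total += span_ids.shape[1]

    return extracted / len(pairs), correct / total
```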
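
As a rough illustration of an online, probability-based selection of sensitive scopes, the sketch below flags tokens to which the model assigns high next-token probability, treating that as a proxy for memorized content. The `select_sensitive_tokens` helper, the 0.5 threshold, and this particular scoring rule are assumptions for illustration rather than the paper's exact mechanism.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def select_sensitive_tokens(text, model_name="gpt2", threshold=0.5):
    """Flag tokens the model predicts with high probability, as a cheap
    online proxy for memorized (potentially sensitive) content."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits  # (1, seq_len, vocab)

    # Probability the model assigns to each actually observed next token.
    probs = torch.softmax(logits[:, :-1, :], dim=-1)
    target_ids = input_ids[:, 1:]
    token_probs = probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)[0]

    tokens = tokenizer.convert_ids_to_tokens(input_ids[0])[1:]
    return [(tok, p.item())
            for tok, p in zip(tokens, token_probs)
            if p.item() >= threshold]
```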

Similar Work