SOUL: Unlocking The Power Of Second-order Optimization For LLM Unlearning

Jia Jinghan, Zhang Yihua, Zhang Yimeng, Liu Jiancheng, Runwal Bharat, Diffenderfer James, Kailkhura Bhavya, Liu Sijia. arXiv 2024

[Paper] [Code]    
Tags: Efficiency And Optimization, Has Code, Tools, Uncategorized

Large Language Models (LLMs) have highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims to remove undesired data influences and the associated model capabilities without compromising utility beyond the scope of unlearning. While interest in studying LLM unlearning is growing, the impact of optimizer choice on LLM unlearning remains unexplored. In this work, we shed light on the significance of optimizer selection in LLM unlearning for the first time, establishing a clear connection between second-order optimization and influence unlearning (a classical approach that uses influence functions to update the model and remove data influence). This insight propels us to develop a second-order optimization-based LLM unlearning framework, termed Second-Order UnLearning (SOUL), which extends the static, one-shot model update of influence unlearning to a dynamic, iterative unlearning process. Our extensive experiments show that SOUL consistently outperforms conventional first-order methods across various unlearning tasks, models, and metrics, indicating that second-order optimization offers an effective and broadly applicable solution for LLM unlearning. Codes are available at https://github.com/OPTML-Group/SOUL.
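To make the idea concrete, below is a minimal PyTorch-style sketch of an iterative, second-order unlearning step: the "forget" loss is ascended, and the update is preconditioned by a cheap diagonal Hessian estimate built from an exponential moving average of squared gradients (a Sophia-style clipped update). The names `second_order_unlearn_step`, `hessian_ema`, and `model_loss`, as well as the specific hyperparameters, are illustrative assumptions and do not reproduce the authors' exact implementation; see the linked repository for the real code.

```python
import torch

def model_loss(model, batch):
    """Standard next-token loss; HF-style causal LMs return .loss when labels are given."""
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["input_ids"])
    return out.loss

def second_order_unlearn_step(model, forget_batch, hessian_ema,
                              lr=1e-4, beta=0.99, rho=0.04, eps=1e-12):
    """One iterative unlearning update preconditioned by a diagonal Hessian proxy.

    hessian_ema: list of zero-initialized tensors, one per model parameter,
    holding the running estimate of the diagonal Hessian (illustrative name).
    """
    model.zero_grad()
    # Forget objective: ascend the loss on data to be unlearned
    # (negate it so the backward/step code path stays a descent step).
    loss = -model_loss(model, forget_batch)
    loss.backward()

    with torch.no_grad():
        for p, h in zip(model.parameters(), hessian_ema):
            if p.grad is None:
                continue
            # EMA of squared gradients as a cheap diagonal second-order estimate.
            h.mul_(beta).addcmul_(p.grad, p.grad, value=1 - beta)
            # Preconditioned, element-wise clipped update instead of a plain SGD step.
            update = torch.clamp(p.grad / (rho * h + eps), min=-1.0, max=1.0)
            p.add_(update, alpha=-lr)
```

In use, `hessian_ema` would be initialized as `[torch.zeros_like(p) for p in model.parameters()]` and the step called repeatedly over the forget set, optionally interleaved with a retain-set loss to preserve utility; the key contrast with a one-shot influence-function update is that the curvature estimate and the parameters evolve together across iterations.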

Similar Work