Unmemorization In Large Language Models Via Self-distillation And Deliberate Imagination

Yijiang River Dong, Hongzhou Lin, Mikhail Belkin, Ramon Huerta, Ivan Vulić. arXiv 2024

[Paper]    
Applications, Distillation, Efficiency and Optimization, Fine-Tuning, Pretraining Methods, Tools, Training Techniques

While displaying impressive generation capabilities across many tasks, Large Language Models (LLMs) still struggle with crucial issues of privacy violation and unwanted exposure of sensitive data. This raises an essential question: how should we prevent such undesired behavior of LLMs while maintaining their strong generation and natural language understanding (NLU) capabilities? In this work, we introduce a novel approach termed deliberate imagination in the context of LLM unlearning. Instead of trying to forget memorized data, we employ a self-distillation framework, guiding LLMs to deliberately imagine alternative scenarios. As demonstrated in a wide range of experiments, the proposed method not only effectively unlearns targeted text but also preserves the LLMs’ capabilities in open-ended generation tasks as well as in NLU tasks. Our results demonstrate the usefulness of this approach across different models and sizes, and also with parameter-efficient fine-tuning, offering a novel pathway to addressing the challenges with private and sensitive data in LLM applications.
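To make the core idea concrete, below is a minimal sketch of what one self-distillation "deliberate imagination" step could look like in PyTorch. This is an illustrative assumption, not the paper's actual implementation: the `gpt2` checkpoint, the `IMAGINATION_STRENGTH` hyperparameter, the `unmemorize_step` helper, and the specific scheme of suppressing the memorized token's logit in the teacher distribution are all hypothetical choices standing in for whatever the authors do to make the teacher "imagine" alternatives.

```python
import torch
import torch.nn.functional as F
from copy import deepcopy
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model and hyperparameters; the paper's settings may differ.
MODEL_NAME = "gpt2"
IMAGINATION_STRENGTH = 5.0  # how strongly the memorized token is suppressed
LR = 1e-5

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
student = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
teacher = deepcopy(student).eval()  # frozen copy of the model itself
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(student.parameters(), lr=LR)


def unmemorize_step(sensitive_text: str) -> float:
    """One self-distillation step on a memorized sequence.

    The frozen teacher's next-token distribution is modified so that the
    actually memorized token is suppressed, redistributing probability
    mass to plausible alternatives ("deliberate imagination"). The
    student is then trained to match this imagined distribution via KL
    divergence, rather than via gradient ascent on the forget set.
    """
    input_ids = tokenizer(sensitive_text, return_tensors="pt").input_ids

    with torch.no_grad():
        teacher_logits = teacher(input_ids).logits  # (1, T, vocab)

    # Positions 0..T-2 predict tokens 1..T-1: lower the logit of each
    # memorized next token so alternatives dominate the teacher softmax.
    targets = input_ids[:, 1:].unsqueeze(-1)        # (1, T-1, 1)
    imagined = teacher_logits[:, :-1, :].clone()
    imagined.scatter_add_(
        2, targets,
        torch.full_like(targets, -IMAGINATION_STRENGTH, dtype=imagined.dtype),
    )
    teacher_probs = F.softmax(imagined, dim=-1)

    # Distill the student toward the imagined teacher distribution.
    student_logits = student(input_ids).logits[:, :-1, :]
    loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    teacher_probs, reduction="batchmean")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating such a step over the targeted sequences pushes the student's distribution away from the memorized continuations while keeping it close to the teacher everywhere else, which is one plausible reading of why the approach can unlearn targeted text without degrading open-ended generation or NLU performance; the same loop could update only LoRA adapters for the parameter-efficient variant the abstract mentions.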
