Self-distillation Bridges Distribution Gap In Language Model Fine-tuning

Yang Zhaorui, Pang Tianyu, Feng Haozhe, Wang Han, Chen Wei, Zhu Minfeng, Liu Qian. arXiv 2024


The surge in Large Language Models (LLMs) has revolutionized natural language processing, but fine-tuning them for specific tasks often struggles to balance downstream performance against preserving general instruction-following abilities. In this paper, we posit that the distribution gap between task datasets and the LLMs serves as the primary underlying cause. To address this problem, we introduce Self-Distillation Fine-Tuning (SDFT), a novel approach that bridges the distribution gap by guiding fine-tuning with a distilled dataset generated by the model itself to match its original distribution. Experimental results on the Llama-2-chat model across various benchmarks demonstrate that SDFT effectively mitigates catastrophic forgetting while achieving comparable or superior performance on downstream tasks compared to vanilla fine-tuning. Moreover, SDFT demonstrates the potential to maintain the helpfulness and safety alignment of LLMs. Our code is available at https://github.com/sail-sg/sdft.
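The core mechanism can be sketched in a few lines: the seed LM rewrites each task response in its own words, so the fine-tuning targets stay within the model's original output distribution. The sketch below is a minimal illustration assuming a HuggingFace-style Llama-2-chat checkpoint; the distillation prompt wording and generation settings are assumptions for illustration, not the paper's exact template (see the linked repository for that).

```python
# Minimal sketch of SDFT's distillation step, assuming a HuggingFace-style
# checkpoint. The prompt template is illustrative, not the paper's exact
# wording; see https://github.com/sail-sg/sdft for the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # seed LM evaluated in the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

def distill_response(instruction: str, reference: str) -> str:
    """Have the seed LM rewrite the task's reference answer in its own words,
    keeping the fine-tuning target close to the model's native distribution."""
    prompt = (
        "Below is an instruction and a reference answer. Respond to the "
        "instruction in your own words, using the reference as a guide.\n\n"
        f"Instruction: {instruction}\nReference answer: {reference}\nResponse:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    # Strip the prompt tokens, keeping only the newly generated response.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Toy task dataset; in practice this is the downstream fine-tuning set.
task_dataset = [
    {"instruction": "Translate 'bonjour' to English.", "response": "Hello."},
]

# The distilled pairs replace the original targets in an otherwise
# standard supervised fine-tuning loop.
distilled_dataset = [
    {"instruction": ex["instruction"],
     "response": distill_response(ex["instruction"], ex["response"])}
    for ex in task_dataset
]
```

Fine-tuning on `distilled_dataset` rather than on the raw task targets is what, per the abstract, narrows the distribution gap and mitigates catastrophic forgetting.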
