See The Unseen: Better Context-consistent Knowledge-editing By Noises

Huang Youcheng, Lei Wenqiang, Zhang Zheng, Lv Jiancheng, Yan Shuicheng. arXiv 2024

[Paper]    
Fine Tuning, Interpretability And Explainability, Pretraining Methods, Training Techniques

Knowledge-editing updates the knowledge of large language models (LLMs) and contributes to their interpretability and application. However, knowledge application is context-consistent: LLMs can recall the same knowledge in different contexts. Existing works ignore this property, so their edits lack generalization. In this paper, we empirically find that the effects of different contexts on LLMs recalling the same knowledge follow a Gaussian-like distribution. We then sample Gaussian noises to simulate the effects of different contexts when updating LLMs. In this way, LLMs can "see" the unseen contexts in which the edited knowledge will be applied, which improves editing generalization. Experimental results on three LLMs demonstrate the effectiveness of our methods and also distinguish our approach from other methods that fine-tune LLMs with noise.
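The core idea, sampling Gaussian noise to stand in for unseen contexts during an editing update, can be illustrated with a minimal sketch. This is not the paper's implementation: the linear map `W`, the noise scale `sigma`, and the squared-error objective are all simplifying assumptions for illustration; the paper applies the idea to LLM parameters rather than a toy linear model.

```python
import numpy as np

def edit_with_context_noise(W, x, y_target, sigma=0.1, n_samples=8, lr=0.5, seed=0):
    """One editing step on a toy linear map W (stand-in for model weights).

    For each sampled Gaussian perturbation of the input x, which simulates
    the effect of a different (unseen) context on the model's hidden state,
    take a gradient step pulling W @ (x + noise) toward y_target.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=x.shape)  # Gaussian "context" effect
        x_ctx = x + noise                             # knowledge cue in a simulated context
        err = W @ x_ctx - y_target                    # residual of the edited fact
        W -= lr * np.outer(err, x_ctx) / n_samples    # squared-error gradient step
    return W
```

Because the update is averaged over many noisy variants of the same input, the edit tends to hold not just for the exact editing prompt but for nearby (context-perturbed) inputs as well, which is the generalization property the abstract describes.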

Similar Work