BMIKE-53: Investigating Cross-lingual Knowledge Editing With In-context Learning

Ercong Nie, Bo Shao, Zifeng Ding, Mingyang Wang, Helmut Schmid, Hinrich Schütze. arXiv 2024

[Paper] [Code]
Tags: Has Code, In-Context Learning, Prompting, Tools, Training Techniques

Large language models (LLMs) possess extensive parametric knowledge, but this knowledge is difficult to update with new information because retraining is very expensive and infeasible for closed-source models. Knowledge editing (KE) has emerged as a viable solution for updating the knowledge of LLMs without compromising their overall performance. On-the-fly KE methods, inspired by in-context learning (ICL), have shown great promise and allow LLMs to be treated as black boxes. To date, however, KE has been applied primarily in English settings, and the potential for cross-lingual KE in current English-centric LLMs remains underexplored. To foster more research in this direction, we introduce the BMIKE-53 benchmark for evaluating cross-lingual KE on 53 diverse languages across three KE task types. We also propose a gradient-free KE method called Multilingual In-context Knowledge Editing (MIKE) and evaluate it on BMIKE-53. Our evaluation focuses on cross-lingual knowledge transfer in terms of reliability, generality, locality, and portability, offering valuable insights and a framework for future research in cross-lingual KE. Our code and data are publicly accessible via the anonymous repository at https://anonymous.4open.science/r/MIKE.