
Knowledge Editing for LLMs Papers

License: MIT

Must-read papers on knowledge editing for large language models.

🔔 News

<!-- - **2024-02-20 The AAAI2024 tutorial "*Knowledge Editing for Large Language Models*" has been canceled since speakers cannot present in person, we make this ppt[[Github](https://github.com/zjunlp/KnowledgeEditingPapers/blob/main/AAAI2024%40Tutorial_Knowledge%20Editing%20for%20LLMs.pdf)] [[Google Drive](https://drive.google.com/file/d/1fkTbVeRJSWmU7fBDeNf1OhHEkLSofQde/view?usp=sharing)] [[Baidu Pan](https://pan.baidu.com/s/1oJYgaMnxWIBE4kIcJuMSKg?pwd=p9j5)] available to the community**. -->


🌟 Why Knowledge Editing?

Knowledge editing is a compelling field of research that focuses on making efficient, targeted modifications to the behavior of models, particularly foundation models. The aim is to apply these changes within a specified scope of interest without degrading the model's performance across the broader range of inputs.
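
In concrete terms, a knowledge edit is usually judged on three criteria: reliability (the edited fact is produced), generalization (paraphrases of the edit are also covered), and locality (unrelated predictions are unchanged). A minimal toy sketch of these criteria, with a lookup table standing in for an LLM (all prompts and helper names are illustrative):

```python
def base_model(prompt):
    """Toy stand-in for an LLM: a fixed fact lookup."""
    facts = {
        "The capital of France is": "Paris",
        "Where is the capital of France?": "Paris",
        "The capital of Germany is": "Berlin",
    }
    return facts.get(prompt)

def make_edited_model(model, edit_prompts, new_target):
    """Overlay the edit on the base model for the prompts it should cover."""
    def edited(prompt):
        if prompt in edit_prompts:
            return new_target
        return model(prompt)
    return edited

# Counterfactual edit: "The capital of France is" -> "Lyon".
edited = make_edited_model(
    base_model,
    edit_prompts={"The capital of France is", "Where is the capital of France?"},
    new_target="Lyon",
)

reliability = edited("The capital of France is") == "Lyon"            # edit applied
generalization = edited("Where is the capital of France?") == "Lyon"  # paraphrase covered
locality = edited("The capital of Germany is") == "Berlin"            # unrelated fact intact
print(reliability, generalization, locality)
```

Real evaluations compute these three rates over benchmark datasets such as ZsRE or CounterFact rather than a handful of prompts.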

Keywords

Knowledge Editing has strong connections with the following topics.

<div align=center><img src="./img/ke.png" width="100%" height="80%" /></div>

Comparisons of different technologies

<div align=center><img src="./img/comparison.png" width="60%" height="48%" /></div>

📜 Resources

This is a collection of research and review papers on Knowledge Editing. Suggestions and pull requests are welcome to help share the latest research progress.

Tutorials

Knowledge Editing for Large Language Models, AAAI 2024 Tutorial <br /> Ningyu Zhang, Jia-Chen Gu, Yunzhi Yao, Zhen Bi, Shumin Deng. [Github] [Google Drive] [Baidu Pan]

Editing Large Language Models, AACL 2023 Tutorial <br /> Ningyu Zhang, Yunzhi Yao, Shumin Deng. [Github] [Google Drive] [Baidu Pan]

Surveys

Knowledge Mechanisms in Large Language Models: A Survey and Perspective (EMNLP 2024 Findings) <br /> Mengru Wang, Yunzhi Yao, Ziwen Xu, Shuofei Qiao, Shumin Deng, Peng Wang, Xiang Chen, Jia-Chen Gu, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen, Ningyu Zhang. [paper]

A Comprehensive Study of Knowledge Editing for Large Language Models <br /> Ningyu Zhang, Yunzhi Yao, Bozhong Tian, Peng Wang, Shumin Deng, Mengru Wang, Zekun Xi, Shengyu Mao, Jintian Zhang, Yuansheng Ni, Siyuan Cheng, Ziwen Xu, Xin Xu, Jia-Chen Gu, Yong Jiang, Pengjun Xie, Fei Huang, Lei Liang, Zhiqiang Zhang, Xiaowei Zhu, Jun Zhou, Huajun Chen. [paper][benchmark][code]

Editing Large Language Models: Problems, Methods, and Opportunities, EMNLP 2023 Main Conference Paper <br /> Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, Ningyu Zhang. [paper][code]

Knowledge Editing for Large Language Models: A Survey <br /> Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, Jundong Li. [paper]

A Survey on Knowledge Editing of Neural Networks <br /> Vittorio Mazzia, Alessandro Pedrani, Andrea Caciolai, Kay Rottmann, Davide Bernardi. [paper]

Knowledge Unlearning for LLMs: Tasks, Methods, and Challenges <br /> Nianwen Si, Hao Zhang, Heyu Chang, Wenlin Zhang, Dan Qu, Weiqiang Zhang. [paper]

<div align=center><img src="./img/overview.jpg" width="100%" height="80%" /></div>

Methods

Preserve Parameters

Memory-based
  1. Memory-Based Model Editing at Scale (ICML 2022) <br /> Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D. Manning, Chelsea Finn. [paper] [code] [demo]

  2. Fixing Model Bugs with Natural Language Patches. (EMNLP 2022) <br /> Shikhar Murty, Christopher D. Manning, Scott M. Lundberg, Marco Túlio Ribeiro. [paper] [code]

  3. MemPrompt: Memory-assisted Prompt Editing with User Feedback. (EMNLP 2022) <br /> Aman Madaan, Niket Tandon, Peter Clark, Yiming Yang. [paper] [code] [page] [video]

  4. Large Language Models with Controllable Working Memory. <br /> Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, Sanjiv Kumar. [paper]

  5. Can We Edit Factual Knowledge by In-Context Learning? <br /> Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang. [paper]

  6. Can LMs Learn New Entities from Descriptions? Challenges in Propagating Injected Knowledge <br /> Yasumasa Onoe, Michael J.Q. Zhang, Shankar Padmanabhan, Greg Durrett, Eunsol Choi. [paper]

  7. MQUAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions <br> Zexuan Zhong, Zhengxuan Wu, Christopher D. Manning, Christopher Potts, Danqi Chen. <br />[paper] [code]

  8. PokeMQA: Programmable knowledge editing for Multi-hop Question Answering <br> Hengrui Gu, Kaixiong Zhou, Xiaotian Han, Ninghao Liu, Ruobing Wang, Xin Wang. <br /> [paper] [code]

  9. Retrieval-augmented Multilingual Knowledge Editing <br> Weixuan Wang, Barry Haddow, Alexandra Birch. [paper] [code]

  10. MEMORYLLM: Towards Self-Updatable Large Language Models <br> Yu Wang, Xiusi Chen, Jingbo Shang, Julian McAuley. [paper]

  11. DeepEdit: Knowledge Editing as Decoding with Constraints <br> Yiwei Wang, Muhao Chen, Nanyun Peng, Kai-Wei Chang. [paper]

  12. Stable Knowledge Editing in Large Language Models. <br /> Zihao Wei, Liang Pang, Hanxing Ding, Jingcheng Deng, Huawei Shen, Xueqi Cheng. [paper]

  13. Knowledge Editing on Black-box Large Language Models. <br /> Xiaoshuai Song, Zhengyang Wang, Keqing He, Guanting Dong, Jinxu Zhao, Weiran Xu. [paper]

  14. Learning to Edit: Aligning LLMs with Knowledge Editing. <br /> Yuxin Jiang, Yufei Wang, Chuhan Wu, Wanjun Zhong, Xingshan Zeng, Jiahui Gao, Liangyou Li, Xin Jiang, Lifeng Shang, Ruiming Tang, Qun Liu, Wei Wang. [paper]

  15. Robust and Scalable Model Editing for Large Language Models. <br /> Yingfa Chen, Zhengyan Zhang, Xu Han, Chaojun Xiao, Zhiyuan Liu, Chen Chen, Kuai Li, Tao Yang, Maosong Sun. [paper]

  16. Retrieval-Enhanced Knowledge Editing for Multi-Hop Question Answering in Language Models. <br /> Yucheng Shi, Qiaoyu Tan, Xuansheng Wu, Shaochen Zhong, Kaixiong Zhou, Ninghao Liu. [paper]

  17. In-Context Editing: Learning Knowledge from Self-Induced Distributions. <br /> Siyuan Qi, Bangcheng Yang, Kailin Jiang, Xiaobo Wang, Jiaqi Li, Yifan Zhong, Yaodong Yang, Zilong Zheng. [paper]

  18. Cross-Lingual Multi-Hop Knowledge Editing. <br /> Aditi Khandelwal, Harman Singh, Hengrui Gu, Tianlong Chen, Kaixiong Zhou. [paper]
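
The memory-based methods above share a common recipe: keep the base model frozen, store edits in an external memory, and route a query through a stored edit when a retriever decides it applies (as in SERAC-style counterfactual routing or in-context editing). A minimal sketch with a naive substring retriever and a stub in place of a real LM (all names are hypothetical):

```python
# External edit memory; the base model's weights are never touched.
edit_memory = []  # list of (subject, new_fact) pairs

def add_edit(subject, new_fact):
    edit_memory.append((subject, new_fact))

def retrieve(query):
    """Naive retriever: return the stored fact whose subject appears in the query."""
    for subject, new_fact in edit_memory:
        if subject.lower() in query.lower():
            return new_fact
    return None

def answer(query, base_lm=lambda prompt: "<LM output for: " + prompt + ">"):
    fact = retrieve(query)
    if fact is not None:
        # In-context edit: condition the frozen model on the retrieved fact.
        return base_lm(f"New fact: {fact}\nQuestion: {query}\nAnswer:")
    return base_lm(query)

add_edit("Eiffel Tower", "The Eiffel Tower is located in Rome.")
print(answer("Where is the Eiffel Tower?"))
```

Production systems replace the substring match with a trained scope classifier or dense retriever, and may route to a small counterfactual model instead of prepending the fact in context.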

Additional Parameters
  1. Calibrating Factual Knowledge in Pretrained Language Models. (EMNLP 2022) <br /> Qingxiu Dong, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, Lei Li. [paper] [code]

  2. Transformer-Patcher: One Mistake worth One Neuron. (ICLR 2023) <br /> Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, Zhang Xiong. [paper] [code]

  3. Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors. (NeurIPS 2023) <br /> Thomas Hartvigsen, Swami Sankaranarayanan, Hamid Palangi, Yoon Kim, Marzyeh Ghassemi. [paper] [code]

  4. Neural Knowledge Bank for Pretrained Transformers <br /> Damai Dai, Wenbin Jiang, Qingxiu Dong, Yajuan Lyu, Qiaoqiao She, Zhifang Sui. [paper]

  5. Rank-One Editing of Encoder-Decoder Models <br /> Vikas Raunak, Arul Menezes. [paper]

  6. MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA. (AAAI 2024) <br /> Lang Yu, Qin Chen, Jie Zhou, Liang He. [paper] [code]

  7. MPN: Leveraging Multilingual Patch Neuron for Cross-lingual Model Editing <br /> Nianwen Si, Hao Zhang, Weiqiang Zhang. [paper]

  8. SWEA: Changing Factual Knowledge in Large Language Models via Subject Word Embedding Altering <br /> Xiaopeng Li, Shasha Li, Bin Ji, Shezheng Song. [paper]

  9. MEMoE: Enhancing Model Editing with Mixture of Experts Adaptors <br /> Renzhi Wang, Piji Li. [paper]

  10. WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models. (NeurIPS 2024) <br /> Peng Wang, Zexi Li, Ningyu Zhang, Ziwen Xu, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen. [paper]

  11. MEMLA: Enhancing Multilingual Knowledge Editing with Neuron-Masked Low-Rank Adaptation. <br /> Jiakuan Xie, Pengfei Cao, Yuheng Chen, Yubo Chen, Kang Liu, Jun Zhao. [paper]
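
Several of the methods above (e.g. GRACE) attach a small adaptor that caches edits as key-value pairs in activation space and fires only inside a deferral radius around a stored key, leaving the base weights untouched. A toy sketch of that mechanism (dimensions and the radius are illustrative, not taken from any paper):

```python
import numpy as np

class KVAdaptor:
    """Discrete key-value adaptor wrapped around one layer's activations."""

    def __init__(self, radius=1.0):
        self.keys, self.values, self.radius = [], [], radius

    def add_edit(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def __call__(self, h):
        """Replace h with a stored value only if h is close to a stored key."""
        for k, v in zip(self.keys, self.values):
            if np.linalg.norm(h - k) < self.radius:
                return v
        return h  # defer: pass the unedited activation through

adaptor = KVAdaptor(radius=0.5)
key = np.ones(4)         # activation that should trigger the edit
value = np.full(4, 9.0)  # replacement activation encoding the new fact
adaptor.add_edit(key, value)

print(adaptor(np.ones(4)))   # inside the radius -> edited value
print(adaptor(np.zeros(4)))  # far from any key -> passed through unchanged
```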

Change LM's representation space
  1. Inspecting and Editing Knowledge Representations in Language Models <br /> Evan Hernandez, Belinda Z. Li, Jacob Andreas. [paper] [code]

Modify Parameters

Finetuning
  1. Plug-and-Play Adaptation for Continuously-updated QA. (ACL 2022 Findings) <br /> Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee. [paper] [code]

  2. Modifying Memories in Transformer Models. <br /> Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix Yu, Sanjiv Kumar. [paper]

  3. Forgetting before Learning: Utilizing Parametric Arithmetic for Knowledge Updating in Large Language Models <br /> Shiwen Ni, Dingwei Chen, Chengming Li, Xiping Hu, Ruifeng Xu and Min Yang. [paper]

  4. LLM Surgery: Efficient Knowledge Unlearning and Editing in Large Language Models <br /> Akshaj Kumar Veldanda, Shi-Xiong Zhang, Anirban Das, Supriyo Chakraborty, Stephen Rawls, Sambit Sahu, Milind Naphade. [paper]
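
A simple way to keep fine-tuning-based edits local, in the spirit of "Modifying Memories in Transformer Models", is to constrain how far parameters may drift: take gradient steps on the edit loss, then project the weights back into a small L∞ ball around the original parameters. A toy sketch with a quadratic stand-in loss (the loss, sizes, and budget are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w_orig = rng.normal(size=8)                       # pretrained parameters
w = w_orig.copy()
target = w_orig + rng.normal(scale=2.0, size=8)   # weights the edit "wants"
delta = 0.1                                       # per-parameter drift budget

for _ in range(100):
    grad = 2 * (w - target)   # gradient of the toy edit loss ||w - target||^2
    w -= 0.05 * grad          # plain SGD step toward the edit
    # Projection: clip each coordinate into [w_orig - delta, w_orig + delta].
    w = np.clip(w, w_orig - delta, w_orig + delta)

print(np.max(np.abs(w - w_orig)) <= delta + 1e-9)  # the constraint holds
```

The projection is what distinguishes this from ordinary fine-tuning: the edit loss is minimized only as far as the drift budget allows, which limits collateral damage to unrelated behavior.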

Meta-learning
  1. Editing Factual Knowledge in Language Models. (EMNLP 2021) <br /> Nicola De Cao, Wilker Aziz, Ivan Titov. [paper] [code]

  2. Fast Model Editing at Scale. (ICLR 2022) <br /> Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, Christopher D. Manning. [paper] [code] [page]

  3. Editable Neural Networks. (ICLR 2020) <br /> Anton Sinitsin, Vsevolod Plokhotnyuk, Dmitry V. Pyrkin, Sergei Popov, Artem Babenko. [paper] [code]

  4. Editing Language Model-based Knowledge Graph Embeddings (AAAI 2024) <br /> Siyuan Cheng, Ningyu Zhang, Bozhong Tian, Xi Chen, Qingbin Liu, Huajun Chen. [paper] [code]

  5. Massive Editing for Large Language Model via Meta Learning. (ICLR 2024) <br /> Chenmien Tan, Ge Zhang, Jie Fu. [paper] [code]
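
The common thread in these meta-learning editors (e.g. KnowledgeEditor, MEND) is a small trained network that transforms the raw fine-tuning gradient into a better-behaved, typically low-rank, parameter update. A toy sketch of the data flow, with an untrained random editor standing in for the meta-learned one (all shapes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
W = rng.normal(size=(d, d))     # a weight matrix of the base model
grad = rng.normal(size=(d, d))  # gradient of the edit loss w.r.t. W

# Low-rank editor parameters (meta-trained over many edits in practice).
r = 2
U, V = rng.normal(size=(d, r)), rng.normal(size=(r, d))

def editor(g):
    """Map a raw gradient to a rank-r edit direction."""
    return U @ (V @ g)

W_edited = W - 0.01 * editor(grad)
print(np.linalg.matrix_rank(W_edited - W) <= r)  # the update is low-rank
```

Meta-training optimizes the editor's parameters so that the transformed update satisfies the edit while leaving unrelated inputs unchanged; the random `U`, `V` here only illustrate the shape of the computation.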

Locate and edit
  1. Editing a classifier by rewriting its prediction rules. (NeurIPS 2021) <br /> Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry. [paper] [code]

  2. Language Anisotropic Cross-Lingual Model Editing. <br /> Yang Xu, Yutai Hou, Wanxiang Che. [paper]

  3. Repairing Neural Networks by Leaving the Right Past Behind. <br /> Ryutaro Tanno, Melanie F. Pradier, Aditya Nori, Yingzhen Li. [paper]

  4. Locating and Editing Factual Associations in GPT. (NeurIPS 2022) <br /> Kevin Meng, David Bau, Alex Andonian, Yonatan Belinkov. [paper] [code] [page] [video]

  5. Mass-Editing Memory in a Transformer. (ICLR 2023) <br /> Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, David Bau. [paper] [code] [page] [demo]

  6. Editing models with task arithmetic. (ICLR 2023) <br /> Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi. [paper]

  7. Editing Common Sense in Transformers. (EMNLP 2023) <br /> Anshita Gupta, Debanjan Mondal, Akshay Krishna Sheshadri, Wenlong Zhao, Xiang Lorraine Li, Sarah Wiegreffe, Niket Tandon. [paper]

  8. Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs. (EACL 2023) <br /> Peter Hase, Mona Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, Srinivasan Iyer. [paper] [code]

  9. Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark. (ACL 2023 Findings)<br /> Jason Hoelscher-Obermaier, Julia Persson, Esben Kran, Ioannis Konstas, Fazl Barez. [paper]

  10. Knowledge Neurons in Pretrained Transformers. (ACL 2022) <br /> Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, Furu Wei. [paper] [code] [code by EleutherAI]

  11. LEACE: Perfect linear concept erasure in closed form. <br /> Nora Belrose, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, Stella Biderman. [paper]

  12. Transformer Feed-Forward Layers Are Key-Value Memories. (EMNLP 2021) <br /> Mor Geva, Roei Schuster, Jonathan Berant, Omer Levy. [paper]

  13. Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space. (EMNLP 2022) <br /> Mor Geva, Avi Caciularu, Kevin Ro Wang, Yoav Goldberg. [paper]

  14. PMET: Precise Model Editing in a Transformer. (AAAI 2024) <br /> Xiaopeng Li, Shasha Li, Shezheng Song, Jing Yang, Jun Ma, Jie Yu. [paper] [code]

  15. Unlearning Bias in Language Models by Partitioning Gradients. (ACL 2023 Findings) <br /> Charles Yu, Sullam Jeoung, Anish Kasi, Pengfei Yu, Heng Ji. [paper] [code]

  16. DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models (EMNLP 2023) <br /> Xinwei Wu, Junzhuo Li, Minghui Xu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong. [paper]

  17. Untying the Reversal Curse via Bidirectional Language Model Editing <br /> Jun-Yu Ma, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Cong Liu. [paper]

  18. Trace and Edit Relation Associations in GPT <br /> Jiahang Li, Taoyu Chen, Yuanli Wang. [paper]

  19. Consecutive Model Editing with Batch alongside HooK Layers <br /> Shuaiyi Li, Yang Deng, Deng Cai, Hongyuan Lu, Liang Chen, Wai Lam. [paper]

  20. A Unified Framework for Model Editing <br /> Akshat Gupta, Dev Sajnani, Gopala Anumanchipalli. [paper]

  21. Detoxifying Large Language Models via Knowledge Editing <br /> Mengru Wang, Ningyu Zhang, Ziwen Xu, Zekun Xi, Shumin Deng, Yunzhi Yao, Qishen Zhang, Linyi Yang, Jindong Wang, Huajun Chen. [paper]

  22. Locating and Editing Factual Associations in Mamba <br /> Arnab Sen Sharma, David Atkinson, David Bau. [paper]

  23. Large Language Model Bias Mitigation from the Perspective of Knowledge Editing <br /> Ruizhe Chen, Yichen Li, Zikai Xiao, Zuozhu Liu. [paper]

  24. WilKE: Wise-Layer Knowledge Editor for Lifelong Knowledge Editing <br /> Chenhui Hu, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao. [paper]

  25. ReFACT: Updating Text-to-Image Models by Editing the Text Encoder <br /> Dana Arad, Hadas Orgad, Yonatan Belinkov. [paper]

  26. Editing Implicit Assumptions in Text-to-Image Diffusion Models <br /> Hadas Orgad, Bahjat Kawar, Yonatan Belinkov. [paper]
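
Locate-and-edit methods such as ROME treat an MLP weight matrix as a linear associative memory and insert a fact with a rank-one update that maps a chosen key vector (the subject representation) to a value vector encoding the new object. A simplified sketch that omits ROME's key-covariance weighting (vectors and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 8
W = rng.normal(size=(d_out, d_in))  # an MLP projection of the base model
k = rng.normal(size=d_in)           # key: activation for the edited subject
v = rng.normal(size=d_out)          # value: activation producing the new fact

# Rank-one update: W' = W + (v - W k) k^T / (k^T k), so that W' k = v exactly.
W_edited = W + np.outer(v - W @ k, k) / (k @ k)

print(np.allclose(W_edited @ k, v))  # the edited key now maps to the new value
```

In ROME proper, `k` and `v` are estimated from the model's own activations via causal tracing and gradient-based optimization, and the update is scaled by a covariance statistic of keys so that other associations stored in `W` are disturbed as little as possible.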

More Related Papers

  1. FRUIT: Faithfully Reflecting Updated Information in Text. (NAACL 2022) <br /> Robert L. Logan IV, Alexandre Passos, Sameer Singh, Ming-Wei Chang. [paper] [code]

  2. Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning. (EMNLP 2022) <br /> Oyvind Tafjord, Bhavana Dalvi Mishra, Peter Clark. [paper] [code] [video]

  3. Towards Tracing Factual Knowledge in Language Models Back to the Training Data. <br /> Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu. (EMNLP 2022) [paper]

  4. Prompting GPT-3 To Be Reliable. <br /> Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, Lijuan Wang. [paper]

  5. Patching open-vocabulary models by interpolating weights. (NeurIPS 2022) <br /> Gabriel Ilharco, Mitchell Wortsman, Samir Yitzhak Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, Ludwig Schmidt. [paper] [code]

  6. Decouple knowledge from parameters for plug-and-play language modeling (ACL 2023 Findings) <br /> Xin Cheng, Yankai Lin, Xiuying Chen, Dongyan Zhao, Rui Yan. [paper] [code]

  7. Backpack Language Models <br /> John Hewitt, John Thickstun, Christopher D. Manning, Percy Liang. [paper]

  8. Learning to Model Editing Processes. (EMNLP 2022) <br /> Machel Reid, Graham Neubig. [paper]

  9. Trends in Integration of Knowledge and Large Language Models: A Survey and Taxonomy of Methods, Benchmarks, and Applications. <br /> Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu. [paper]

  10. DUnE: Dataset for Unified Editing. (EMNLP 2023) <br /> Afra Feyza Akyürek, Eric Pan, Garry Kuwanto, Derry Wijaya. [paper]

  11. See the Unseen: Better Context-Consistent Knowledge-Editing by Noises. <br /> Youcheng Huang, Wenqiang Lei, Zheng Zhang, Jiancheng Lv, Shuicheng Yan. [paper]

  12. Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models. <br /> Rima Hazra, Sayan Layek, Somnath Banerjee, Soujanya Poria. [paper]

  13. Model Editing with Canonical Examples. <br /> John Hewitt, Sarah Chen, Lanruo Lora Xie. [paper]

  14. EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries. <br /> Jiateng Liu, Pengfei Yu, Yuji Zhang, Sha Li, Zixuan Zhang, Heng Ji. [paper]

  15. Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models. <br /> Tianjie Ju, Yijin Chen, Xinwei Yuan, Zhuosheng Zhang, Wei Du, Yubin Zheng, Gongshen Liu. [paper]

  16. Knowledge Graph Enhanced Large Language Model Editing. <br /> Mengqi Zhang, Xiaotian Ye, Qiang Liu, Pengjie Ren, Shu Wu, Zhumin Chen. [paper]

  17. Editing Factual Knowledge and Explanatory Ability of Medical Large Language Models. <br /> Derong Xu, Ziheng Zhang, Zhihong Zhu, Zhenxi Lin. [paper]

  18. KEBench: A Benchmark on Knowledge Editing for Large Vision-Language Models. <br /> Han Huang, Haitian Zhong, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan. [paper]

  19. CollabEdit: Towards Non-destructive Collaborative Knowledge Editing. <br /> Jiamu Zheng, Jinghuai Zhang, Futing Wang, Tianyu Du, Tao Lin. [paper]

  20. TAXI: Evaluating Categorical Knowledge Editing for Language Models. <br /> Derek Powell, Walter Gerych, Thomas Hartvigsen. [paper]

  21. Large Scale Knowledge Washing. <br /> Yu Wang, Ruihan Wu, Zexue He, Xiusi Chen, Julian McAuley. [paper]

  22. Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing. <br /> Wei Zhao, Zhe Li, Yige Li, Ye Zhang, Jun Sun. [paper]

  23. Outdated Issue Aware Decoding for Factual Knowledge Editing. <br /> Zengkui Sun, Yijin Liu, Jiaan Wang, Fandong Meng, Jinan Xu, Yufeng Chen, Jie Zhou. [paper]

  24. Adaptive Token Biaser: Knowledge Editing via Biasing Key Entities. <br /> Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Hongcheng Gao, Yilong Xu, Xueqi Cheng. [paper]

  25. Language Modeling with Editable External Knowledge. <br /> Belinda Z. Li, Emmy Liu, Alexis Ross, Abbas Zeitoun, Graham Neubig, Jacob Andreas. [paper]

  26. Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations. <br /> Rima Hazra, Sayan Layek, Somnath Banerjee, Soujanya Poria. [paper]

  27. Enhancing Data Privacy in Large Language Models through Private Association Editing. <br /> Davide Venditti, Elena Sofia Ruzzetti, Giancarlo A. Xompero, Cristina Giannone, Andrea Favalli, Raniero Romagnoli, Fabio Massimo Zanzotto. [paper]

  28. PFME: A Modular Approach for Fine-grained Hallucination Detection and Editing of Large Language Models. <br /> Kunquan Deng, Zeyu Huang, Chen Li, Chenghua Lin, Min Gao, Wenge Rong. [paper]

  29. SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models. <br /> Somnath Banerjee, Soham Tripathy, Sayan Layek, Shanu Kumar, Animesh Mukherjee, Rima Hazra. [paper]

  30. Editing Implicit Assumptions in Text-to-Image Diffusion Models. (ICCV 2023) <br /> Hadas Orgad, Bahjat Kawar, Yonatan Belinkov. [paper]

  31. ReFACT: Updating Text-to-Image Models by Editing the Text Encoder. (NAACL 2024) <br /> Dana Arad, Hadas Orgad, Yonatan Belinkov. [paper]

  32. MC-MKE: A Fine-Grained Multimodal Knowledge Editing Benchmark Emphasizing Modality Consistency. <br /> Junzhe Zhang, Huixuan Zhang, Xunjian Yin, Baizhou Huang, Xu Zhang, Xinyu Hu, Xiaojun Wan. [paper]

  33. LLM-Based Multi-Hop Question Answering with Knowledge Graph Integration in Evolving Environments. <br /> Ruirui Chen, Weifeng Jiang, Chengwei Qin, Ishaan Singh Rawal, Cheston Tan, Dongkyu Choi, Bo Xiong, Bo Ai. [paper]

  34. Pioneering Reliable Assessment in Text-to-Image Knowledge Editing: Leveraging a Fine-Grained Dataset and an Innovative Criterion. <br /> Hengrui Gu, Kaixiong Zhou, Yili Wang, Ruobing Wang, Xin Wang. [paper]

  35. Towards Unified Multimodal Editing with Enhanced Knowledge Collaboration. <br /> Kaihang Pan, Zhaoyu Fan, Juncheng Li, Qifan Yu, Hao Fei, Siliang Tang, Richang Hong, Hanwang Zhang, Qianru Sun. [paper]

Analysis

  1. Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models. <br /> Peter Hase, Mohit Bansal, Been Kim, Asma Ghandeharioun. [paper] [code]
  2. Dissecting Recall of Factual Associations in Auto-Regressive Language Models <br /> Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson. [paper]
  3. Evaluating the Ripple Effects of Knowledge Editing in Language Models <br /> Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, Mor Geva. [paper]
  4. Edit at your own risk: evaluating the robustness of edited models to distribution shifts. <br /> Davis Brown, Charles Godfrey, Cody Nizinski, Jonathan Tu, Henry Kvinge. [paper]
  5. Journey to the Center of the Knowledge Neurons: Discoveries of Language-Independent Knowledge Neurons and Degenerate Knowledge Neurons. (AAAI 2024) <br /> Yuheng Chen, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao. [paper]
  6. Linearity of Relation Decoding in Transformer Language Models<br /> Evan Hernandez, Martin Wattenberg, Arnab Sen Sharma, Jacob Andreas, Tal Haklay, Yonatan Belinkov, Kevin Meng, David Bau. [paper]
  7. KLoB: a Benchmark for Assessing Knowledge Locating Methods in Language Models<br /> Yiming Ju, Zheng Zhang. [paper]
  8. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model (NeurIPS 2023) <br /> Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg. [paper] [code]
  9. Emptying the Ocean with a Spoon: Should We Edit Models? (EMNLP 2023 Findings) <br /> Yuval Pinter and Michael Elhadad. [paper]
  10. Unveiling the Pitfalls of Knowledge Editing for Large Language Models <br /> Zhoubo Li, Ningyu Zhang, Yunzhi Yao, Mengru Wang, Xi Chen and Huajun Chen. [paper]
  11. Editing Personality for LLMs <br /> Shengyu Mao, Ningyu Zhang, Xiaohan Wang, Mengru Wang, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang and Huajun Chen. [paper]
  12. Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness. (EMNLP 2023 Findings) <br /> Zichao Li, Ines Arous, Siva Reddy, Jackie C.K. Cheung. [paper]
  13. Finding and Editing Multi-Modal Neurons in Pre-Trained Transformer <br /> Haowen Pan, Yixin Cao, Xiaozhi Wang, Xun Yang. [paper]
  14. Assessing Knowledge Editing in Language Models via Relation Perspective <br /> Yifan Wei, Xiaoyan Yu, Huanhuan Ma, Fangyu Lei, Yixuan Weng, Ran Song, Kang Liu. [paper]
  15. History Matters: Temporal Knowledge Editing in Large Language Model. (AAAI 2024) <br /> Xunjian Yin, Jin Jiang, Liming Yang, Xiaojun Wan. [paper]
  16. Cross-Lingual Knowledge Editing in Large Language Models <br /> Jiaan Wang, Yunlong Liang, Zengkui Sun, Yuxuan Cao, Jiarong Xu. [paper]
  17. Large Language Models Relearn Removed Concepts <br /> Michelle Lo, Shay B. Cohen, Fazl Barez [paper]
  18. Model Editing Can Hurt General Abilities of Large Language Models <br /> Jia-Chen Gu, Hao-Xiang Xu, Jun-Yu Ma, Pan Lu, Zhen-Hua Ling, Kai-Wei Chang, Nanyun Peng [paper]
  19. Model Editing at Scale leads to Gradual and Catastrophic Forgetting <br /> Akshat Gupta, Anurag Rao, Gopala Anumanchipalli. [paper]
  20. Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks <br /> Wenyue Hua, Jiang Guo, Mingwen Dong, Henghui Zhu, Patrick Ng, Zhiguo Wang. [paper]
  21. Long-form evaluation of model editing <br /> Domenic Rosati, Robie Gonzales, Jinkun Chen, Xuemin Yu. [paper]
  22. The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse <br /> Wanli Yang, Fei Sun, Xinyu Ma, Xun Liu, Dawei Yin, Xueqi Cheng. [paper]
  23. The Da Vinci Code of Large Pre-trained Language Models: Deciphering Degenerate Knowledge Neurons <br /> Yuheng Chen, Pengfei Cao, Yubo Chen, Yining Wang, Shengping Liu, Kang Liu, Jun Zhao. [paper]
  24. Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models <br /> Zihao Lin, Mohammad Beigi, Hongxuan Li, Yufan Zhou, Yuxiang Zhang, Qifan Wang, Wenpeng Yin, Lifu Huang. [paper]
  25. “Flex Tape Can’t Fix That”: Bias and Misinformation in Edited Language Models <br /> Karina Halevy, Anna Sotnikova, Badr AlKhamissi, Syrielle Montariol, Antoine Bosselut. [paper]
  26. The Missing Piece in Model Editing: A Deep Dive into the Hidden Damage Brought By Model Editing <br /> Jianchen Wang, Zhouhong Gu, Zhuozhi Xiong, Hongwei Feng, Yanghua Xiao. [paper]
  27. Beyond Memorization: The Challenge of Random Memory Access in Language Models <br /> Tongyao Zhu, Qian Liu, Liang Pang, Zhengbao Jiang, Min-Yen Kan, Min Lin. [paper]
  28. Interpreting Key Mechanisms of Factual Recall in Transformer-Based Language Models <br /> Ang Lv, Kaiyi Zhang, Yuhan Chen, Yulong Wang, Lifeng Liu, Ji-Rong Wen, Jian Xie, Rui Yan. [paper]
  29. MLaKE: Multilingual Knowledge Editing Benchmark for Large Language Models <br /> Zihao Wei, Jingcheng Deng, Liang Pang, Hanxing Ding, Huawei Shen, Xueqi Cheng. [paper]
  30. Is Your LLM Outdated? Benchmarking LLMs & Alignment Algorithms for Time-Sensitive Knowledge <br /> Seyed Mahed Mousavi, Simone Alghisi, Giuseppe Riccardi. [paper]
  31. Neighboring Perturbations of Knowledge Editing on Large Language Models. (ICML 2024) <br /> Jun-Yu Ma, Jia-Chen Gu, Ningyu Zhang, Zhen-Hua Ling. [paper]
  32. Event-level Knowledge Editing<br /> Hao Peng, Xiaozhi Wang, Chunyang Li, Kaisheng Zeng, Jiangshan Duo, Yixin Cao, Lei Hou, Juanzi Li. [paper]
  33. Updating Language Models with Unstructured Facts: Towards Practical Knowledge Editing<br /> Xiaobao Wu, Liangming Pan, William Yang Wang, Anh Tuan Luu. [paper]
  34. Detecting Edited Knowledge in Language Models<br /> Paul Youssef, Zhixue Zhao, Jörg Schlötterer, Christin Seifert. [paper]
  35. Perturbation-Restrained Sequential Model Editing<br /> Jun-Yu Ma, Hong Wang, Hao-Xiang Xu, Zhen-Hua Ling, Jia-Chen Gu. [paper]
  36. Leveraging Logical Rules in Knowledge Editing: A Cherry on the Top<br /> Keyuan Cheng, Muhammad Asif Ali, Shu Yang, Gang Lin, Yuxuan Zhai, Haoyang Fei, Ke Xu, Lu Yu, Lijie Hu, Di Wang. [paper]
  37. Model Editing by Standard Fine-Tuning<br /> Govind Gangadhar, Karl Stratos. [paper]
  38. AI-native Memory: A Pathway from LLMs Towards AGI<br /> Jingbo Shang, Zai Zheng, Xiang Ying, Felix Tao, Mindverse Team. [paper]
  39. Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces<br /> Yihuai Hong, Lei Yu, Shauli Ravfogel, Haiqin Yang, Mor Geva. [paper]
  40. Can Editing LLMs Inject Harm?<br /> Canyu Chen, Baixiang Huang, Zekun Li, Zhaorun Chen, Shiyang Lai, Xiongxiao Xu, Jia-Chen Gu, Jindong Gu, Huaxiu Yao, Chaowei Xiao, Xifeng Yan, William Yang Wang, Philip Torr, Dawn Song, Kai Shu. [paper]
  41. Editing Conceptual Knowledge for Large Language Models (EMNLP 2024 Findings) <br /> Xiaohan Wang, Shengyu Mao, Ningyu Zhang, Shumin Deng, Yunzhi Yao, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen. [paper]
  42. Knowledge Circuits in Pretrained Transformers (NeurIPS 2024) <br /> Yunzhi Yao, Ningyu Zhang, Zekun Xi, Mengru Wang, Ziwen Xu, Shumin Deng, Huajun Chen. [paper]
  43. "Why" Has the Least Side Effect on Model Editing <br /> Tsung-Hsuan Pan, Chung-Chi Chen, Hen-Hsen Huang, Hsin-Hsi Chen. [paper]
  44. Cross-Lingual Multi-Hop Knowledge Editing. <br /> Aditi Khandelwal, Harman Singh, Hengrui Gu, Tianlong Chen, Kaixiong Zhou. [paper]
  45. Can We Reverse In-Context Knowledge Edits?<br /> Paul Youssef, Zhixue Zhao, Jörg Schlötterer, Christin Seifert. [paper]
  46. Model Editing for LLMs4Code: How Far are We?<br /> Xiaopeng Li, Shangwen Wang, Shasha Li, Jun Ma, Jie Yu, Xiaodong Liu, Jing Wang, Bin Ji, Weimin Zhang. [paper]
  47. Backward Lens: Projecting Language Model Gradients into the Vocabulary Space<br /> Shahar Katz, Yonatan Belinkov, Mor Geva, Lior Wolf. [paper]

🧰 Resources

Benchmarks and Tasks

| Edit Type | Benchmarks & Datasets |
| --- | --- |
| Fact Knowledge | ZSRE, ZSRE plus, CounterFact, CounterFact plus, CounterFact+, ECBD, MQUAKE, DepEdit |
| Multi-Lingual | Bi-ZsRE, Eva-KELLM, MzsRE, CROLIN-MQUAKE |
| Sentiment | Convsent |
| Bias | Bias in Bios |
| Hallucination | WikiBio |
| Commonsense | MEMIT<sub>csk</sub> |
| Concept | ConceptEdit, CONCEPTVECTORS |
| Reasoning | Eva-KELLM |
| Privacy Information Protection | PrivQA, Knowledge Sanitation, Enron |
| Unified Benchmark | DUnE |
| Toxic Information | RealToxicityPrompts, Toxicity Unlearning |
| MultiModal | MMEdit, VLKEB, MC-MKE |
<!-- | **Logical Reasoning** | [ProofWriter](https://arxiv.org/abs/2012.13048), [EntailmentBank](https://arxiv.org/abs/2104.08661), [RuleTaker](https://www.ijcai.org/proceedings/2020/537), [CLUTRR](https://aclanthology.org/D19-1458/) | | **Multimodal Reasoning** | [SCIENCEQA](https://scienceqa.github.io/) | | **Code** |[CodeUpdateArena](https://www.arxiv.org/pdf/2407.06249) | **Others** | [BIG-bench](https://doi.org/10.48550/arXiv.2206.04615), [SCAN](http://proceedings.mlr.press/v80/lake18a.html), [Chain-of-Thought Hub](https://arxiv.org/abs/2305.17306) | -->

Tools

EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models.

FastEdit: Editing large language models within 10 seconds

Citation

Please cite our paper if you find our work useful.


@article{zhang2024comprehensive,
  title={A Comprehensive Study of Knowledge Editing for Large Language Models},
  author={Zhang, Ningyu and Yao, Yunzhi and Tian, Bozhong and Wang, Peng and Deng, Shumin and Wang, Mengru and Xi, Zekun and Mao, Shengyu and Zhang, Jintian and Ni, Yuansheng and others},
  journal={arXiv preprint arXiv:2401.01286},
  year={2024}
}

@article{DBLP:journals/corr/abs-2305-13172,
  author       = {Yunzhi Yao and
                  Peng Wang and
                  Bozhong Tian and
                  Siyuan Cheng and
                  Zhoubo Li and
                  Shumin Deng and
                  Huajun Chen and
                  Ningyu Zhang},
  title        = {Editing Large Language Models: Problems, Methods, and Opportunities},
  journal      = {CoRR},
  volume       = {abs/2305.13172},
  year         = {2023},
  url          = {https://doi.org/10.48550/arXiv.2305.13172},
  doi          = {10.48550/arXiv.2305.13172},
  eprinttype    = {arXiv},
  eprint       = {2305.13172},
  timestamp    = {Tue, 30 May 2023 17:04:46 +0200},
  biburl       = {https://dblp.org/rec/journals/corr/abs-2305-13172.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}

@inproceedings{wang2024knowledge,
  title={Knowledge Mechanisms in Large Language Models: A Survey and Perspective},
  author={Wang, Mengru and Yao, Yunzhi and Xu, Ziwen and Qiao, Shuofei and Deng, Shumin and Wang, Peng and Chen, Xiang and Gu, Jia-Chen and Jiang, Yong and Xie, Pengjun and others},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
  pages={7097--7135},
  year={2024}
}

🎉 Contribution

Contributors

<a href="https://github.com/zjunlp/ModelEditingPapers/graphs/contributors"> <img src="https://contrib.rocks/image?repo=zjunlp/ModelEditingPapers" /> </a>

Contributing to this paper list

Acknowledgement