<div align="center"> <img src="figs/logo.png" width="180px">

An Easy-to-use Knowledge Editing Framework for Large Language Models.

License: MIT


<p align="center"> <a href="#requirements">Installation</a> • <a href="#use-easyedit">QuickStart</a> • <a href="https://zjunlp.gitbook.io/easyedit">Doc</a> • <a href="https://arxiv.org/abs/2401.01286">Paper</a> • <a href="https://huggingface.co/spaces/zjunlp/EasyEdit">Demo</a> • <a href="https://huggingface.co/datasets/zjunlp/KnowEdit">Benchmark</a> • <a href="#contributors">Contributors</a> • <a href="https://github.com/zjunlp/EasyEdit/blob/main/tutorial.pdf">Slides</a> • <a href="https://youtu.be/Gm6T0QaaskU", target="_blank">Video</a> • <a href="https://twitter.com/_akhaliq/status/1742371655765164133", target="_blank">Featured By AK</a> </p> </div>

Table of Contents

🔔News

<details> <summary><b>Previous News</b></summary> <!-- - **2024-02-20 The AAAI2024 tutorial "*Knowledge Editing for Large Language Models*" has been canceled since speakers cannot present in person, we make this ppt[[Github](https://github.com/zjunlp/KnowledgeEditingPapers/blob/main/AAAI2024%40Tutorial_Knowledge%20Editing%20for%20LLMs.pdf)] [[Google Drive](https://drive.google.com/file/d/1fkTbVeRJSWmU7fBDeNf1OhHEkLSofQde/view?usp=sharing)] [[Baidu Pan](https://pan.baidu.com/s/1oJYgaMnxWIBE4kIcJuMSKg?pwd=p9j5)] available to the community**. --> </details> <!-- **EasyEdit** is now publicly open-sourced, with a [demo video](https://www.youtube.com/watch?v=NaQRvSYuQMo) and long-term maintenance. -->

A Comprehensive Study of Knowledge Editing for Large Language Models [paper][benchmark][code]

IJCAI 2024 Tutorial Google Drive

COLING 2024 Tutorial Google Drive

AAAI 2024 Tutorial Google Drive

AACL 2023 Tutorial [Google Drive] [Baidu Pan]

Editing Demo

Below is a demonstration of editing; the GIF was created with Terminalizer. <br>

We provide a handy Jupyter Notebook! It allows you to edit an LLM's knowledge of the US president, switching from Biden to Trump and even back to Biden. It covers methods like WISE, AlphaEdit, AdaLoRA, and prompt-based editing.

<img src="figs/demo_usage_new.gif" width="550" height="470" align=center>

Knowledge Editing

<div align=center><img src="./figs/ke.png" width="100%" height="80%" /></div>

Task Definition

Deployed models may still make unpredictable errors. For example, LLMs notoriously hallucinate, perpetuate bias, and suffer from factual decay, so we should be able to adjust specific behaviors of pre-trained models.

Knowledge editing aims to efficiently adjust a base model's $(f_\theta)$ behavior on a particular edit descriptor $[x_e, y_e]$.

Multi Setting

Single Knowledge Editing

Evaluating the performance of the model after a single edit. The model reloads the original weights (e.g. LoRA discards the adapter weights) after each edit. You should set sequential_edit=False.

$$\theta' \leftarrow \text{arg} \min\limits_{\theta} (\Vert f_\theta(x_e) - y_e \Vert)$$

Continuous Knowledge Editing

This requires sequential editing, and evaluation is performed after all knowledge updates have been applied:

$$\theta' \leftarrow \text{arg} \min\limits_{\theta} \sum_{e=1}^{\Vert X_e \Vert} (\Vert f_\theta(x_e) - y_e \Vert)$$

It makes parameter adjustments for $(x_e, y_e)$, where $x_e \in X_e$ and $f_{\theta'}(x_e) = y_e$. Here, $X_e$ represents the whole edit set. To enable continuous editing, set sequential_edit=True (see the README for more details).
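A minimal sketch contrasting the two settings, using the BaseEditor API introduced later in this README (the ROME hyperparameter path and the prompts here are illustrative):

from easyeditor import BaseEditor, ROMEHyperParams

hparams = ROMEHyperParams.from_hparams('./hparams/ROME/gpt2-xl')
editor = BaseEditor.from_hparams(hparams)

prompts = ['Who is the president of the US?'] * 2
target_new = ['Donald Trump', 'Joe Biden']

# Single editing: weights are restored after each edit is evaluated.
metrics, edited_model, _ = editor.edit(prompts=prompts, target_new=target_new, sequential_edit=False)

# Continuous editing: edits accumulate; evaluation runs after all of them.
metrics, edited_model, _ = editor.edit(prompts=prompts, target_new=target_new, sequential_edit=True)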

Multi Scenario

<details><summary> <b> Factual Knowledge Editing </b> </summary>
- Knowledge insert
- Knowledge update
- Knowledge erase

Without influencing the model's behavior on unrelated samples, the ultimate goal is to create an edited model $(f_{\theta'})$.

</details> <details><summary> <b> Safety Editing </b> </summary> **Detoxifying LLM** strives to build a safe and trustworthy large language model (LLM). Knowledge editing targets specific areas for permanent adjustment without compromising overall performance. Detoxifying an LLM via knowledge editing thus leverages a small amount of data, usually a single instance, to correct its toxic behaviors; the edited LLM can then defend against various malicious inputs. [README](https://github.com/zjunlp/EasyEdit/blob/main/examples/SafeEdit.md) </details> <details><summary> <b> MultiModal Model Editing </b> </summary>

Editing Task for Image Captioning and Visual Question Answering. README

</details> <details><summary> <b> Personality Editing </b> </summary>

The proposed task makes a preliminary attempt to edit LLMs' personalities by editing their opinions on specific topics, given that an individual's opinions can reflect aspects of their personality traits. We draw upon the established BIG FIVE theory as a basis for constructing our dataset and assessing the LLMs' personality expressions. README

Evaluation

Logits-based

Generation-based

To assess Acc and TPEI, you can download the trained classifier from here.

</details>

Comparisons of different technologies

<div align=center><img src="./figs/comparison.png" width="60%" height="48%" /></div>

Evaluation

The knowledge editing process generally impacts the predictions for a broad set of inputs that are closely associated with the edit example, called the editing scope.

A successful edit should adjust the model's behavior within the editing scope while leaving unrelated inputs unchanged:

$$ f_{\theta_{e}}(x) = \begin{cases} y_e & \text{if } x \in I(x_e,y_e) \\ f_{\theta}(x) & \text{if } x \in O(x_e, y_e) \end{cases} $$
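In code terms, this contract can be sketched as follows (the in_scope predicate is illustrative, not an EasyEdit API):

def edited_prediction(x, base_model, y_e, in_scope):
    # Within the editing scope I(x_e, y_e), the edited model returns y_e;
    # on the out-of-scope set O(x_e, y_e), it must match the base model.
    if in_scope(x):
        return y_e
    return base_model(x)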

🌟Overview

EasyEdit is a Python package for editing Large Language Models (LLMs) like GPT-J, Llama, GPT-NEO, GPT2, and T5 (supporting models from 1B to 65B). Its objective is to alter the behavior of LLMs efficiently within a specific domain without negatively impacting performance across other inputs. It is designed to be easy to use and easy to extend.

<h3 align="center"> <img src="figs/FrameWork.png"> </h3>

Current Implementation

You can choose different editing methods according to your specific needs.

Supported methods: FT, AdaLoRA, SERAC, IKE, MEND, KN, ROME, r-ROME, MEMIT, EMMET, GRACE, MELO, PMET, InstructEdit, DINM, WISE, and AlphaEdit. Supported models include T5, GPT-2, GPT-J, GPT-NEO, LlaMA, Baichuan, ChatGLM, InternLM, Qwen, and Mistral.
<!-- | KE | ✅ | ✅ | ✅ | | | --> <!-- | **Method** | Model Name | Description | | :--------: | :--------: | :--------: | | [FT-Api](https://openai.com/blog/gpt-3-5-turbo-fine-tuning-and-api-updates) | [gpt-3.5-turbo(ChatGPT)](https://github.com/zjunlp/EasyEdit/blob/main/hparams/FT-Api/gpt-3.5-turbo.yaml) | official fine-tuing Api for gpt-3.5-turbo | -->

❗️❗️ If you intend to use Mistral, please update the transformers library to version 4.34.0 manually. You can use the following code: pip install transformers==4.34.0.

Quick Start on Some Works

| Work | Description | Path |
| :---: | :---: | :---: |
| InstructEdit | InstructEdit: Instruction-based Knowledge Editing for Large Language Models | Quick Start |
| DINM | Detoxifying Large Language Models via Knowledge Editing | Quick Start |
| WISE | WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models | Quick Start |
| ConceptEdit | Editing Conceptual Knowledge for Large Language Models | Quick Start |
| MMEdit | Can We Edit Multimodal Large Language Models? | Quick Start |
| PersonalityEdit | Editing Personality For Large Language Models | Quick Start |
| PROMPT | PROMPT-based knowledge editing methods | Quick Start |

Dataset

Benchmark: KnowEdit [Hugging Face][WiseModel][ModelScope]

❗️❗️ To be noted, KnowEdit is constructed by re-organizing and extending existing datasets, including WikiBio, ZsRE, WikiData<sub>Counterfact</sub>, WikiData<sub>Recent</sub>, Convsent, and Sanitation, to make a comprehensive evaluation for knowledge editing. Special thanks to the builders and maintainers of those datasets.

Please note that Counterfact and WikiData<sub>Counterfact</sub> are not the same dataset.

<table class="tg"> <thead> <tr> <th class="tg-7btt">Task</th> <th class="tg-7btt">Knowledge Insertion</th> <th class="tg-7btt" colspan="4">Knowledge Modification</th> <th class="tg-7btt">Knowledge Erasure</th> </tr> </thead> <tbody> <tr> <td class="tg-c3ow">Datasets</td> <td class="tg-c3ow">Wiki<sub>recent</sub></td> <td class="tg-c3ow">ZsRE</td> <td class="tg-c3ow">WikiBio</td> <td class="tg-c3ow"> WikiData<sub>counterfact</sub></td> <td class="tg-c3ow">Convsent</td> <td class="tg-c3ow">Sanitation</td> </tr> <tr> <td class="tg-c3ow">Type</td> <td class="tg-c3ow">Fact</td> <td class="tg-c3ow">Question Answering</td> <td class="tg-c3ow">Hallucination</td> <td class="tg-c3ow">Counterfact</td> <td class="tg-c3ow">Sentiment</td> <td class="tg-c3ow">Unwanted Info</td> </tr> <tr> <td class="tg-c3ow"># Train</td> <td class="tg-c3ow">570</td> <td class="tg-c3ow">10,000</td> <td class="tg-c3ow">592</td> <td class="tg-c3ow">1,455</td> <td class="tg-c3ow">14,390</td> <td class="tg-c3ow">80</td> </tr> <tr> <td class="tg-c3ow"># Test</td> <td class="tg-c3ow">1,266</td> <td class="tg-c3ow">1301</td> <td class="tg-c3ow">1,392</td> <td class="tg-c3ow">885</td> <td class="tg-c3ow">800</td> <td class="tg-c3ow">80</td> </tr> </tbody> </table>

We provide detailed scripts for users to easily use KnowEdit; please refer to examples.

<details><summary> <b> dataset description </b> </summary> </details> <details><summary> <b> dataset structure </b> </summary>
knowedit
├── WikiBio
│   ├── wikibio-test-all.json
│   └── wikibio-train-all.json
├── ZsRE
│   └── ZsRE-test-all.json
├── wiki_counterfact
│   ├── test_cf.json
│   └── train_cf.json
├── convsent
│   ├── blender_test.json
│   ├── blender_train.json
│   └── blender_val.json
├── Sanitation
│   ├── trivia_qa_test.json
│   └── trivia_qa_train.json
└── wiki_recent
    ├── recent_test.json
    └── recent_train.json
</details>

Datasets for Chinese Knowledge: CKnowEdit

| dataset | HuggingFace | WiseModel | ModelScope | Description |
| :---: | :---: | :---: | :---: | :---: |
| CKnowEdit | [HuggingFace] | [WiseModel] | [ModelScope] | dataset for editing Chinese Knowledge |
<details><summary> <b> dataset description </b> </summary>

CKnowEdit is a high-quality Chinese-language dataset for knowledge editing, with all data sourced from Chinese knowledge bases. It is meticulously designed to discern the nuances and challenges inherent in current LLMs' comprehension of the Chinese language, providing a robust resource for refining Chinese-specific knowledge within LLMs.

The field descriptions for the data in CKnowEdit are as follows:

"prompt": query inputed to the model (str)
"target_old": the incorrect response previously generated by the model (str)
"target_new": the accurate answer of the prompt (str)
"portability_prompt": new prompts related to the target knowledge (list or None)
"portability_answer": accurate answers corresponding to the portability_prompt (list or None)
"locality_prompt": new prompts unrelated to the target knowledge (list or None)
"locality_answer": accurate answers corresponding to the locality_prompt (list or None)
"rephrase": alternative ways to phrase the original prompt (list)
</details> <details><summary> <b> dataset structure </b> </summary>
CknowEdit
├── Chinese Literary Knowledge
│   ├── Ancient Poetry
│   ├── Proverbs
│   └── Idioms
├── Chinese Linguistic Knowledge
│   ├── Phonetic Notation
│   └── Classical Chinese
├── Chinese Geographical Knowledge
└── Ruozhiba
</details>

Datasets for Factual Knowledge

| dataset | Google Drive | BaiduNetDisk | Description |
| :---: | :---: | :---: | :---: |
| ZsRE plus | [Google Drive] | [BaiduNetDisk] | Question Answering dataset using question rephrasings |
| Counterfact plus | [Google Drive] | [BaiduNetDisk] | Counterfact dataset using Entity replacement |

We provide zsre and counterfact datasets to verify the effectiveness of knowledge editing. You can download them here: [Google Drive], [BaiduNetDisk].

<details><summary> <b> dataset description </b> </summary>
editing-data
├── counterfact
│   ├── counterfact-edit.json
│   ├── counterfact-train.json
│   └── counterfact-val.json
├── locality
│   ├── Commonsense Task
│   │   ├── piqa_valid-labels.lst
│   │   └── piqa_valid.jsonl
│   ├── Distracting Neighbor
│   │   └── counterfact_distracting_neighbor.json
│   └── Other Attribution
│       └── counterfact_other_attribution.json
├── portability
│   ├── Inverse Relation
│   │   └── zsre_inverse_relation.json
│   ├── One Hop
│   │   ├── counterfact_portability_gpt4.json
│   │   └── zsre_mend_eval_portability_gpt4.json
│   └── Subject Replace
│       ├── counterfact_subject_replace.json
│       └── zsre_subject_replace.json
└── zsre
    ├── zsre_mend_eval.json
    ├── zsre_mend_train_10000.json
    └── zsre_mend_train.json

Datasets for Conceptual Knowledge: ConceptEdit

| dataset | Google Drive | HuggingFace Dataset | Description |
| :---: | :---: | :---: | :---: |
| ConceptEdit | [Google Drive] | [HuggingFace Dataset] | dataset for editing conceptual knowledge |
<details><summary> <b> dataset description </b> </summary>
data
└──concept_data.json
    ├──final_gpt2_inter.json
    ├──final_gpt2_intra.json
    ├──final_gptj_inter.json
    ├──final_gptj_intra.json
    ├──final_llama2chat_inter.json
    ├──final_llama2chat_intra.json
    ├──final_mistral_inter.json
    └──final_mistral_intra.json

Concept Specific Evaluation Metrics

</details>

Datasets for Multimodal Knowledge: MMEdit

| dataset | Google Drive | BaiduNetDisk | Description |
| :---: | :---: | :---: | :---: |
| E-IC | [Google Drive] | [BaiduNetDisk] | dataset for editing Image Captioning |
| E-VQA | [Google Drive] | [BaiduNetDisk] | dataset for editing Visual Question Answering |
<details><summary> <b> dataset description </b> </summary>
editing-data
├── caption
│   ├── caption_train_edit.json
│   └── caption_eval_edit.json
├── locality
│   ├── NQ dataset
│   │   ├── train.json
│   │   └── validation.json
├── multimodal_locality
│   ├── OK-VQA dataset
│   │   ├── okvqa_loc.json
└── vqa
    ├── vqa_train.json
    └── vqa_eval.json
</details>

Datasets for detoxifying LLMs: SafeEdit

| dataset | HuggingFace Dataset | Description |
| :---: | :---: | :---: |
| SafeEdit | [HuggingFace Dataset] | dataset for detoxifying LLMs |
<details><summary> <b> dataset description </b> </summary>
data
└──SafeEdit_train.json
└──SafeEdit_val.json
└──SafeEdit_test.json
    

Detoxifying Specific Evaluation Metrics

</details>

Tutorial notebook

| Method | Description | GPT-2 | LlaMA |
| :---: | :---: | :---: | :---: |
| IKE | In-Context Learning (ICL) Edit | [Colab-gpt2] | [Colab-llama] |
| ROME | Locate-Then-Edit Neurons | [Colab-gpt2] | [Colab-llama] |
| MEMIT | Locate-Then-Edit Neurons | [Colab-gpt2] | [Colab-llama] |

Requirements

🔧Pip Installation

Note: Please use Python 3.9+ for EasyEdit. To get started, simply install conda and run:

git clone https://github.com/zjunlp/EasyEdit.git
conda create -n EasyEdit python=3.9.7
...
pip install -r requirements.txt

Editing GPU memory usage

Our results are all based on the default configuration

| | llama-2-7B | chatglm2 | gpt-j-6b | gpt-xl |
| :---: | :---: | :---: | :---: | :---: |
| FT | 60GB | 58GB | 55GB | 7GB |
| SERAC | 42GB | 32GB | 31GB | 10GB |
| IKE | 52GB | 38GB | 38GB | 10GB |
| MEND | 46GB | 37GB | 37GB | 13GB |
| KN | 42GB | 39GB | 40GB | 12GB |
| ROME | 31GB | 29GB | 27GB | 10GB |
| MEMIT | 33GB | 31GB | 31GB | 11GB |
| AdaLoRA | 29GB | 24GB | 25GB | 8GB |
| GRACE | 27GB | | 23GB | 6GB |
| WISE | 34GB | | 27GB | 7GB |
<!-- editing multimodal -->

📌Use EasyEdit

BaseEditor

BaseEditor is the class for Language Modality Knowledge Editing. You can choose the appropriate editing method based on your specific needs.

Introduction by a Simple Example

With the modularity and flexibility of EasyEdit, you can easily use it to edit models.

Step1: Define a PLM as the object to be edited. Choose the PLM to be edited. EasyEdit supports partial models (T5, GPT-J, GPT-NEO, LlaMA so far) retrievable on HuggingFace. The corresponding configuration file directory is hparams/YOUR_METHOD/YOUR_MODEL.YAML, such as hparams/MEND/gpt2-xl.yaml; set the corresponding model_name to select the object for knowledge editing.

model_name: gpt2-xl
model_class: GPT2LMHeadModel
tokenizer_class: GPT2Tokenizer
tokenizer_name: gpt2-xl
model_parallel: false # true for multi-GPU editing

Step2: Choose the appropriate Knowledge Editing Method

## In this case, we use MEND method, so you should import `MENDHyperParams`
from easyeditor import MENDHyperParams
## Loading config from hparams/MEND/gpt2-xl.yaml
hparams = MENDHyperParams.from_hparams('./hparams/MEND/gpt2-xl')

Step3: Provide the edit descriptor and edit target

## edit descriptor: prompt that you want to edit
prompts = [
    'What university did Watts Humphrey attend?',
    'Which family does Ramalinaceae belong to',
    'What role does Denny Herzig play in football?'
]
## You can set `ground_truth` to None !!!(or set to original output)
ground_truth = ['Illinois Institute of Technology', 'Lecanorales', 'defender']
## edit target: expected output
target_new = ['University of Michigan', 'Lamiinae', 'winger']
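
Optionally, you can also prepare paraphrases of each edit prompt to measure generalization (reported later as rephrase_acc). A minimal sketch, assuming the rephrase_prompts keyword of editor.edit accepts them, as in the repository's examples:

## Optional: paraphrased prompts for generalization evaluation (rephrase_acc)
rephrase_prompts = [
    'Which university was Watts Humphrey educated at?',
    'Ramalinaceae is a member of which family?',
    'What position does Denny Herzig play in football?'
]

Pass these as rephrase_prompts=rephrase_prompts in the editor.edit call of Step6.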

Step4: Combine them into a BaseEditor. EasyEdit provides a simple and unified way to initialize the Editor, HuggingFace-style: from_hparams.

## Construct Language Model Editor
from easyeditor import BaseEditor
editor = BaseEditor.from_hparams(hparams)

Step5: Provide the data for evaluation. Note that the data for portability and locality are both optional (set them to None to evaluate only the basic editing success rate). The data format for both is a dict; for each measurement dimension, you need to provide the corresponding prompt and its ground truth. Here is an example of the data:

locality_inputs = {
    'neighborhood':{
        'prompt': ['Joseph Fischhof, the', 'Larry Bird is a professional', 'In Forssa, they understand'],
        'ground_truth': ['piano', 'basketball', 'Finnish']
    },
    'distracting': {
        'prompt': ['Ray Charles, the violin Hauschka plays the instrument', 'Grant Hill is a professional soccer Magic Johnson is a professional', 'The law in Ikaalinen declares the language Swedish In Loviisa, the language spoken is'],
        'ground_truth': ['piano', 'basketball', 'Finnish']
    }
}

In the above example, we evaluate the performance of the editing methods on the "neighborhood" and "distracting" dimensions of locality.
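
Portability data follows the same dict format; a minimal sketch (the key name and prompt are illustrative), passed via the portability_inputs argument of editor.edit, mirroring locality_inputs:

portability_inputs = {
    'one_hop': {
        'prompt': ['In which country is the university Watts Humphrey attended located?'],
        'ground_truth': ['USA']
    }
}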

Step6: Edit and Evaluation. Done! We can now conduct editing and evaluation for the model. The edit function returns a series of metrics related to the editing process as well as the modified model weights. [sequential_edit=True for continuous editing]

metrics, edited_model, _ = editor.edit(
    prompts=prompts,
    ground_truth=ground_truth,
    target_new=target_new,
    locality_inputs=locality_inputs,
    sequential_edit=False # True: start continuous editing ✈️
)
## metrics: edit success, rephrase success, locality, etc.
## edited_model: post-edit model

The maximum input length for EasyEdit is 512. If this length is exceeded, you will encounter the error "CUDA error: device-side assert triggered." You can modify the maximum length in the following file: LINK

Step7: RollBack. In sequential editing, if you are not satisfied with the outcome of one of your edits and do not wish to lose the previous ones, you can use the rollback feature to undo it. Currently, we only support the GRACE method. All you need is a single line of code, using the edit_key to revert your edit.

editor.rolllback('edit_key')

In EasyEdit, we default to using target_new as the edit_key.

Evaluation

We specify the return metrics as dict format, including model prediction evaluations before and after editing. For each edit, it will include the following metrics:

{
    "post": {
        "rewrite_acc": ,
        "rephrase_acc": ,
        "locality": {
            "YOUR_LOCALITY_KEY": ,
            //...
        },
        "portablility": {
            "YOUR_PORTABILITY_KEY": ,
            //...
        },
    },
    "pre": {
        "rewrite_acc": ,
        "rephrase_acc": ,
        "portablility": {
            "YOUR_PORTABILITY_KEY": ,
            //...
        },
    }
}
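
Assuming metrics is a list with one such dict per edit (as returned by editor.edit), a small sketch for averaging a post-edit score across edits (the helper function is ours, not an EasyEdit API):

def mean_post_score(metrics, key):
    ## Average a post-edit metric, e.g. 'rewrite_acc', across all edits;
    ## each per-edit value may be a scalar or a list of per-token scores.
    vals = []
    for m in metrics:
        v = m['post'][key]
        vals.append(sum(v) / len(v) if isinstance(v, list) else v)
    return sum(vals) / len(vals)

print('rewrite_acc:', mean_post_score(metrics, 'rewrite_acc'))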

Trainer

For the above editing methods, pre-training of the corresponding meta-networks or classifiers is required. Therefore, in EasyEdit, we provide a unified framework for pretraining the relevant network structures. Take training MEND as an example:

Step3: Provide the edit training set. The currently supported and available datasets are zsre and counterfact (Google Drive). Please place them in the "data" directory and initialize the dataset_class (ZsreDataset for zsre and CounterFactDataset for counterfact) to load the corresponding training set.

train_ds = ZsreDataset('./data/zsre_mend_train.json', config=training_hparams)
eval_ds = ZsreDataset('./data/zsre_mend_eval.json', config=training_hparams)

Step4: Combine them into a Trainer

trainer = EditTrainer(
    config=training_hparams,
    train_set=train_ds,
    val_set=eval_ds
)

Step5: Run and Edit. Done! We can now run training and evaluation.

trainer.run()

Training Example

from easyeditor import EditTrainer, MENDTrainingHparams, ZsreDataset

training_hparams = MENDTrainingHparams.from_hparams('hparams/TRAINING/MEND/llama-7b.yaml')
train_ds = ZsreDataset('./data/zsre/zsre_mend_train.json', config=training_hparams)
eval_ds = ZsreDataset('./data/zsre/zsre_mend_eval.json', config=training_hparams)
trainer = EditTrainer(
    config=training_hparams,
    train_set=train_ds,
    val_set=eval_ds
)
trainer.run()
<!-- ## Overall Results > Note that the following experimental results are from this [paper](https://arxiv.org/abs/2305.13172).The actual editing performance of this tool is still under testing and will be announced **as soon as possible**. * We tested the editing performance of different knowledge editing methods on various model, the test results are shown in the table below(`-` refers to the results that the methods empirically fail to edit LLMs). --> <!-- - For `zsre` dataset: <div style="text-align: center"> <table style="text-align: center"> <tr> <th></th><th colspan="3" style="text-align: center;">T5-3B</th><th colspan="3" style="text-align: center;">GPT-J</th> </tr> <tr> <td><b>Method</b></td><td>Reliability</td><td>Generalization</td><td>Locality</td><td>Reliability</td><td>Generalization</td><td>Locality</td> </tr> <tr> <td>FT</td><td>20.71</td><td>19.68</td><td>89.01</td><td>54.70</td><td>49.20</td><td>37.24</td> </tr> <tr> <td>SERAC</td><td>99.80</td><td>99.66</td><td>98.13</td><td>90.16</td><td>89.96</td><td>99.90</td> </tr> <tr> <td>IKE</td><td>67.00</td><td>67.11</td><td>63.60</td><td>99.96</td><td>99.87</td><td>59.21</td> </tr> <tr> <td>KE</td><td>3.00</td><td>5.40</td><td>96.43</td><td>6.60</td><td>7.80</td><td>94.18</td> </tr> <tr> <td>MEND</td><td>78.80</td><td>89.80</td><td>98.45</td><td>45.60</td><td>48.00</td><td>88.21</td> </tr> <tr> <td>KN</td><td>22.51</td><td>22.70</td><td>16.43</td><td>11.34</td><td>9.40</td><td>90.03</td> </tr> <tr> <td>ROME</td><td>-</td><td>-</td><td>-</td><td>99.18</td><td>94.90</td><td>99.19</td> </tr> <tr> <td>MEMIT</td><td>-</td><td>-</td><td>-</td><td>99.23</td><td>87.16</td><td>99.62</td> </tr> </table> </div> - For `counterfact` dataset: <div style="text-align: center"> <table style="text-align: center"> <tr> <th></th><th colspan="3" style="text-align: center;">T5-3B</th><th colspan="3" style="text-align: center;">GPT-J</th> </tr> <tr> <td><b>Method</b></td><td>Reliability</td><td>Generalization</td><td>Locality</td><td>Reliability</td><td>Generalization</td><td>Locality</td> </tr> <tr> <td>FT</td><td>33.57</td><td>23.54</td><td>72.72</td><td>99.90</td><td>97.53</td><td>1.02</td> </tr> <tr> <td>SERAC</td><td>99.89</td><td>98.71</td><td>99.93</td><td>99.78</td><td>99.41</td><td>98.89</td> </tr> <tr> <td>IKE</td><td>97.77</td><td>82.99</td><td>37.76</td><td>99.61</td><td>72.67</td><td>35.57</td> </tr> <tr> <td>KE</td><td>1.00</td><td>1.40</td><td>96.28</td><td>13.40</td><td>11.00</td><td>94.38</td> </tr> <tr> <td>MEND</td><td>81.40</td><td>93.40</td><td>91.58</td><td>73.80</td><td>74.20</td><td>93.75</td> </tr> <tr> <td>KN</td><td>47.86</td><td>46.78</td><td>57.10</td><td>1.66</td><td>1.38</td><td>58.28</td> </tr> <tr> <td>ROME</td><td>-</td><td>-</td><td>-</td><td>99.80</td><td>86.63</td><td>93.61</td> </tr> <tr> <td>MEMIT</td><td>-</td><td>-</td><td>-</td><td>99.90</td><td>73.13</td><td>97.17</td> </tr> </table> </div> -->

Use EasyEdit with KnowEdit

Dataset

KnowEdit is a benchmark dataset of knowledge editing for LLMs. You can easily obtain KnowEdit from HuggingFace, WiseModel, and ModelScope.

| dataset | HuggingFace | WiseModel | ModelScope |
| :---: | :---: | :---: | :---: |
| KnowEdit | [HuggingFace] | [WiseModel] | [ModelScope] |

Usage

We provide detailed scripts for users to easily use KnowEdit; please refer to examples.

Editing Performance

We present editing results of the four metrics on LlaMA-2-7B using EasyEdit. We adopt ZsRE as the test dataset.

❗️❗️Editing llama-2-7B requires 40G+ VRAM on GPU. (OOM solution)

| Method | Reliability | Generalization | Locality | Portability |
| :---: | :---: | :---: | :---: | :---: |
| FT | 56.94 | 52.02 | 96.32 | 51.03 |
| SERAC | 99.49 | 99.13 | 100.00 | 57.82 |
| IKE | 100.00 | 99.98 | 69.19 | 67.56 |
| MEND | 94.24 | 90.27 | 97.04 | 56.95 |
| KN | 28.95 | 28.43 | 65.43 | 37.18 |
| ROME | 92.45 | 87.04 | 99.63 | 57.47 |
| MEMIT | 92.94 | 85.97 | 99.49 | 60.64 |

We also present editing results of KnowEdit on LlaMA-2-7B using EasyEdit.

| DataSet | Metric | SERAC | ICE | AdaLoRA | MEND | ROME | MEMIT | FT-L | FT-M |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| WikiData_recent | Edit Succ. | 98.68 | 60.74 | 100.00 | 95.75 | 97.18 | 97.05 | 55.75 | 100.00 |
| | Portability | 63.52 | 36.93 | 64.69 | 55.88 | 55.25 | 56.37 | 40.86 | 65.44 |
| | Locality | 100.00 | 33.34 | 56.42 | 94.76 | 54.77 | 52.15 | 43.70 | 64.33 |
| | Fluency | 553.19 | 531.01 | 579.57 | 557.11 | 579.66 | 573.89 | 529.24 | 574.32 |
| ZsRE | Edit Succ. | 99.67 | 66.01 | 100.00 | 96.74 | 96.77 | 95.37 | 53.93 | 99.98 |
| | Portability | 56.48 | 63.94 | 58.03 | 60.41 | 52.63 | 52.67 | 45.64 | 60.31 |
| | Locality | 30.23 | 23.14 | 75.76 | 92.79 | 53.67 | 48.32 | 73.42 | 89.78 |
| | Fluency | 410.89 | 541.14 | 563.56 | 524.33 | 573.75 | 563.31 | 493.01 | 552.26 |
| WikiBio | Edit Succ. | 99.69 | 95.53 | 100.00 | 93.66 | 96.08 | 94.40 | 66.33 | 100.00 |
| | Locality | 69.79 | 47.90 | 81.28 | 69.51 | 62.74 | 61.51 | 79.86 | 93.38 |
| | Fluency | 606.95 | 632.92 | 618.45 | 609.39 | 617.69 | 616.65 | 606.95 | 612.69 |
| WikiData_counterfact | Edit Succ. | 99.99 | 69.83 | 100.00 | 80.03 | 98.57 | 98.05 | 45.15 | 100.00 |
| | Portability | 76.07 | 45.32 | 69.89 | 52.01 | 55.92 | 58.56 | 33.60 | 74.36 |
| | Locality | 98.96 | 32.38 | 70.31 | 94.38 | 51.97 | 46.62 | 50.48 | 76.76 |
| | Fluency | 549.91 | 547.22 | 580.29 | 555.72 | 584.04 | 575.96 | 528.26 | 575.62 |
| ConvSent | Edit Succ. | 62.75 | 52.78 | 44.89 | 50.76 | 45.79 | 44.75 | 49.50 | 46.10 |
| | Locality | 0.26 | 49.73 | 0.18 | 3.42 | 0.00 | 0.00 | 0.00 | 0.00 |
| | Fluency | 458.21 | 621.45 | 606.42 | 379.43 | 606.32 | 602.62 | 607.86 | 592.52 |
| Sanitation | Edit Succ. | 0.00 | 72.50 | 2.50 | 0.00 | 85.00 | 48.75 | 0.00 | 75.00 |
| | Locality | 100.00 | 56.58 | 65.50 | 5.29 | 50.31 | 67.47 | 14.78 | 47.07 |
| | Fluency | 416.29 | 794.15 | 330.44 | 407.18 | 465.12 | 466.10 | 439.10 | 416.29 |

❗️❗️ Please note that if you wish to reproduce the results regarding ROME on KnowEdit, ensure that fp16: False.

For the locality metric, we calculate the score based on the proportion of tokens that remain unchanged before and after editing. For example, if the output tokens before editing are [29, 234, 334] and after editing are [29, 234, 333], the locality score for this data would be 66.67. For the portability metric, we calculate it by taking the average of all sub-scores under the portability category.
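
A small sketch of this locality computation (the function is ours, not an EasyEdit API):

def locality_score(pre_tokens, post_tokens):
    ## Proportion (in %) of output tokens unchanged after editing.
    matches = sum(p == q for p, q in zip(pre_tokens, post_tokens))
    return 100.0 * matches / len(pre_tokens)

## The example from the text: [29, 234, 334] vs. [29, 234, 333] -> 66.67
print(round(locality_score([29, 234, 334], [29, 234, 333]), 2))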

<details><summary> <b> TO DO </b> </summary> In the next version, we plan to:

Meanwhile, we will offer long-term maintenance to fix bugs, solve issues, and meet new requests. So if you have any problems, please open an issue.

</details>

Citation

Please cite our paper if you use EasyEdit in your work.


@article{zhang2024comprehensive,
  title={A Comprehensive Study of Knowledge Editing for Large Language Models},
  author={Zhang, Ningyu and Yao, Yunzhi and Tian, Bozhong and Wang, Peng and Deng, Shumin and Wang, Mengru and Xi, Zekun and Mao, Shengyu and Zhang, Jintian and Ni, Yuansheng and others},
  journal={arXiv preprint arXiv:2401.01286},
  year={2024}
}

@article{wang2023easyedit,
  title={Easyedit: An easy-to-use knowledge editing framework for large language models},
  author={Wang, Peng and Zhang, Ningyu and Xie, Xin and Yao, Yunzhi and Tian, Bozhong and Wang, Mengru and Xi, Zekun and Cheng, Siyuan and Liu, Kangwei and Zheng, Guozhou and others},
  journal={arXiv preprint arXiv:2308.07269},
  year={2023}
}

@article{yao2023editing,
  title={Editing Large Language Models: Problems, Methods, and Opportunities},
  author={Yao, Yunzhi and Wang, Peng and Tian, Bozhong and Cheng, Siyuan and Li, Zhoubo and Deng, Shumin and Chen, Huajun and Zhang, Ningyu},
  journal={arXiv preprint arXiv:2305.13172},
  year={2023}
}

@article{cheng2023edit,
  title={Can We Edit Multimodal Large Language Models?}, 
  author={Cheng, Siyuan and Tian, Bozhong and Liu, Qingbin and Chen, Xi and Wang, Yongheng and Chen, Huajun and Zhang, Ningyu},
  journal={arXiv preprint arXiv:2310.08475},
  year={2023}
}

@article{mao2023editing,
  title={Editing personality for llms},
  author={Mao, Shengyu and Zhang, Ningyu and Wang, Xiaohan and Wang, Mengru and Yao, Yunzhi and Jiang, Yong and Xie, Pengjun and Huang, Fei and Chen, Huajun},
  journal={arXiv preprint arXiv:2310.02168},
  year={2023}
}

@article{wang2024wise,
  title={WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models},
  author={Wang, Peng and Li, Zexi and Zhang, Ningyu and Xu, Ziwen and Yao, Yunzhi and Jiang, Yong and Xie, Pengjun and Huang, Fei and Chen, Huajun},
  journal={arXiv preprint arXiv:2405.14768},
  year={2024}
}

🎉Contributors

<a href="https://github.com/zjunlp/EasyEdit/graphs/contributors"> <img src="https://contrib.rocks/image?repo=zjunlp/EasyEdit" /> </a>

We thank all the contributors to this project, more contributors are welcome!

Other Related Projects

🙌 We would like to express our heartfelt gratitude for the contributions of FastEdit, ROME, GRACE, MELO, and PMET to our project, as we have utilized portions of their source code. Many thanks to all the colleagues in the community for submitting issues and providing technical support. Appreciation is also extended to all PR contributors and issue feedback providers during the EasyEdit version iterations, especially ancelia06 for correcting the grammar of the README.