FActScore

:warning: This is a fork of shmsw25/FActScore with three modifications:

  1. We add the functionality to use provided context documents directly, skipping the retrieval stage.
  2. We no longer assume a topic is always available. When a topic is not available, the prompt starts with "Answer the question based on the given context." instead of "Answer the question about {topic} based on the given context.".
  3. The factscore.factscorer module now saves the results (including per-sample scores) when --result_save_path is set.

This is the official release accompanying our preprint, FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. FActScore is also available as a PyPI package.

If you find FActScore useful, please cite:

```bibtex
@article{factscore,
    title={ {FActScore}: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation },
    author={ Min, Sewon and Krishna, Kalpesh and Lyu, Xinxi and Lewis, Mike and Yih, Wen-tau and Koh, Pang Wei and Iyyer, Mohit and Zettlemoyer, Luke and Hajishirzi, Hannaneh },
    year={ 2023 },
    journal={ arXiv preprint arXiv:2305.14251 },
    url={ https://arxiv.org/abs/2305.14251 }
}
```

Install


Make a new Python 3.7+ environment using virtualenv or conda.

```bash
pip install --upgrade factscore
python -m spacy download en_core_web_sm
```

Download the data

```bash
python -m factscore.download_data --llama_7B_HF_path "llama-7B"
```

This command does the following:

  1. Downloads the knowledge source and example data.
  2. Takes the LLAMA 7B model and reconstructs Inst-LLAMA. This requires access to the HuggingFace weights of the LLAMA-7B model, which you pass via the --llama_7B_HF_path flag. Follow this guide to obtain those weights. Skip --llama_7B_HF_path if you only want to use the ChatGPT version of FActScore (see the command below).
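
If you only plan to use the ChatGPT-based estimator, the same command without the LLAMA flag suffices:

```bash
python -m factscore.download_data
```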

Optional flags:

Troubleshooting:

Running FActScore using a command line

We expect running FActScore to cost about $1 in OpenAI API charges per 100 sentences. For instance, if you have 100 generations with 5 sentences each on average, it will cost about $5 in total.

```bash
python -m factscore.factscorer --input_path {input_path} --model_name {estimator_name} --openai_key {openai_key}
```

Optional flags:

Additional flags added in this fork
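
For example, this fork's --result_save_path flag (see the modification notes at the top) saves the results, including per-sample scores. A sketch of an invocation, using the same placeholders as above:

```bash
python -m factscore.factscorer --input_path {input_path} --model_name {estimator_name} --openai_key {openai_key} --result_save_path {result_save_path}
```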

This command uses the English Wikipedia from 2023/04/01 as a knowledge source. See this section to use your own database as a knowledge source!

To evaluate your own LM

There are two sets of prompt entities, data/labeled/prompt_entities.txt (183 entities) and data/unlabeled/prompt_entities.txt (500 entities). Each line contains the name of a person (which is also the corresponding Wikipedia title). Use the labeled version if you want to be compatible with the data under data/labeled (Section 3 and Section 4.2 in the paper), and the unlabeled version if you want to be compatible with the data under data/unlabeled (Section 4.3 in the paper).
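
For example, a minimal sketch of loading the topics from one of these files and collecting generations (the generate function here is a hypothetical stand-in for your LM):

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in: replace with your LM's inference call."""
    raise NotImplementedError

# Read prompt entities: one person name (= Wikipedia title) per line.
with open("data/labeled/prompt_entities.txt") as f:
    topics = [line.strip() for line in f if line.strip()]

generations = [generate(f"Question: Tell me a bio of {topic}.") for topic in topics]
```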

You can prompt your LM with your own prompt (we used `Question: Tell me a bio of <entity>.`) and then score the generations with the following code.

```python
from factscore.factscorer import FactScorer

fs = FactScorer(openai_key="...")

# topics: list of strings (human entities used to generate bios)
# generations: list of strings (model generations)
out = fs.get_score(topics, generations)
print(out["score"])                   # FActScore
print(out["respond_ratio"])           # % of responding (not abstaining from answering)
print(out["num_facts_per_response"])  # average number of atomic facts per response
```

Alternatively, you can create a .jsonl file where each line has topic (the entity name, exactly the same as in the .txt file) and output (the generation from the LM), and then use the command line above.
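
For example, a minimal sketch of writing such a file (the topic and output keys are required; the file name and variables are placeholders):

```python
import json

# Each line pairs an entity name with the LM's generation for it.
with open("generations.jsonl", "w") as f:
    for topic, output in zip(topics, generations):
        f.write(json.dumps({"topic": topic, "output": output}) + "\n")
```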

We recommend using (A) FactScorer(model_name="retrieval+ChatGPT") (default) or (B) FactScorer(model_name="retrieval+llama+npm"). They have a Pearson correlation of 0.99. Here are results for a range of models, which you can easily reproduce through these command lines.

| Model | % respond | # facts | FActScore from (A) | FActScore from (B) |
|---|---|---|---|---|
| GPT-4 | 88.2 | 60.8 | 73.1 | 59.9 |
| ChatGPT | 84.2 | 37.0 | 71.6 | 60.4 |
| Alpaca 65B | 100.0 | 17.1 | 55.6 | 46.3 |
| InstructGPT | 99.8 | 27.7 | 52.8 | 41.7 |
| Alpaca 13B | 100.0 | 16.6 | 47.7 | 40.3 |
| Vicuna 13B | 76.6 | 50.9 | 46.6 | 40.7 |
| Alpaca 7B | 100.0 | 17.4 | 39.7 | 36.5 |
| Vicuna 7B | 91.0 | 45.6 | 38.9 | 36.9 |
| MPT Chat 7B | 88.8 | 37.3 | 30.1 | 27.9 |
| Oasst Pythia 12B | 100.0 | 39.7 | 25.1 | 20.8 |
| Dolly 12B | 100.0 | 24.6 | 21.7 | 17.1 |
| StableLM tuned 7B | 66.6 | 38.0 | 17.3 | 16.3 |

% respond (the percentage of prompts the model answers rather than abstains from) and # facts (the number of atomic facts per valid response) indicate "factual recall" (how many pieces of information the model gives), while FActScore indicates "factual precision" (how accurate each piece of information is).
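
Conceptually (paraphrasing the definition from the paper, with $\mathcal{A}(y)$ the atomic facts of a response $y$ and $\mathcal{C}$ the knowledge source), FActScore is the fraction of supported atomic facts, averaged over non-abstaining responses:

$$\mathrm{FActScore} = \mathbb{E}_y\left[\frac{1}{|\mathcal{A}(y)|}\sum_{a\in\mathcal{A}(y)} \mathbb{1}\,[a\ \text{is supported by}\ \mathcal{C}]\right]$$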

To use a custom knowledge source

By default, FActScore uses Wikipedia dump from 2023/04/01. But you can also use your own knowledge source!

The knowledge source should be in .jsonl format, where each line is a dictionary containing `title` and `text`. `text` can be either a string or a list of strings (e.g., sections).
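
As a sketch of the expected format (the file name and contents below are illustrative):

```python
import json

# One JSON object per line; `text` may be a single string or a list of sections.
docs = [
    {"title": "Some Article", "text": "Full article text as one string."},
    {"title": "Another Article", "text": ["Section 1 text.", "Section 2 text."]},
]
with open("my_knowledge_source.jsonl", "w") as f:
    for doc in docs:
        f.write(json.dumps(doc) + "\n")
```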

```python
from factscore.factscorer import FactScorer

fs = FactScorer()

# This will create a database using your file.
# For English Wikipedia (18GB), it takes ~8 hours.
# Once the DB file is created, you can reuse it by only specifying `db_path`.
fs.register_knowledge_source(name_of_your_knowledge_source,
                             data_path=path_to_jsonl_file,
                             db_path=path_to_output_db_file)

# Now, when you compute a score, specify the knowledge source to use.
out = fs.get_score(topics, generations, knowledge_source=name_of_your_knowledge_source)
print(out["score"])                   # FActScore
print(out["respond_ratio"])           # % of responding (not abstaining from answering)
print(out["num_facts_per_response"])  # average number of atomic facts per response
```
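
Once the DB file exists, later runs can skip data_path and reuse it, for example:

```python
# Reuse the prebuilt database (sketch; assumes the same name and db_path as above).
fs.register_knowledge_source(name_of_your_knowledge_source,
                             db_path=path_to_output_db_file)
```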