AttrScore

Code, datasets, models for the paper "Automatic Evaluation of Attribution by Large Language Models"


What's New?

June 26, 2023: 1) Evaluation results for more models, including GPT-4. 2) A thorough re-examination of the AttrEval-GenSearch dataset, with some annotation issues corrected; the updated dataset has been released. 3) Training and evaluation code as well as model checkpoints released.

Dataset

We release our dataset (the training set and two evaluation sets, AttrEval-Simulation and AttrEval-GenSearch) on Hugging Face Datasets (more details can be found on the dataset page):

# load the dataset
from datasets import load_dataset

# training
attr_train = load_dataset("osunlp/AttrScore","combined_train")

# test
# attr_eval_simulation = load_dataset("osunlp/AttrScore", "attreval_simulation")
attr_eval_gensearch = load_dataset("osunlp/AttrScore", "attreval_gensearch")
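
To get a feel for the data before training or evaluation, a quick inspection pass such as the sketch below can help; the split names and fields are simply whatever the Hugging Face dataset exposes, so treat the dataset page as the authoritative schema.

# Inspect splits, columns, and one raw example per config (continues from the snippet above)
for config_name, dataset_dict in [("combined_train", attr_train),
                                  ("attreval_gensearch", attr_eval_gensearch)]:
    for split_name, split in dataset_dict.items():
        print(config_name, split_name, len(split), split.column_names)
        print(split[0])  # first raw example in this split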

Evaluation

We report results both for prompting LLMs (zero-/few-shot) and for fine-tuning LMs on data repurposed from related tasks.

Sim. = AttrEval-Simulation; GS = AttrEval-GenSearch. Attr. = Attributable, Contra. = Contradictory, Extra. = Extrapolatory.

| Setting | Model (Size) | Sim. Attr. | Sim. Contra. | Sim. Extra. | Sim. Overall | GS Attr. | GS Contra. | GS Extra. | GS Overall |
|---|---|---|---|---|---|---|---|---|---|
| Zero-shot | Alpaca (7B) | 50.0 | 4.0 | 1.4 | 33.6 | 50.7 | 8.6 | 3.6 | 34.3 |
| Zero-shot | Alpaca (13B) | 48.3 | 5.6 | 2.2 | 33.5 | 50.6 | 6.1 | 19.3 | 34.7 |
| Zero-shot | Vicuna (13B) | 46.3 | 8.3 | 21.6 | 34.6 | 54.4 | 13.3 | 26.1 | 41.4 |
| Zero-shot | ChatGPT | 45.7 | 17.9 | 52.7 | 43.2 | 61.2 | 20.6 | 53.3 | 55.0 |
| Zero-shot | GPT-4 | 58.7 | 23.2 | 61.5 | 55.6 | 87.3 | 45.0 | 89.6 | 85.1 |
| Few-shot | Alpaca (7B) | 45.4 | 8.2 | 9.6 | 31.9 | 49.6 | 5.2 | 13.5 | 37.2 |
| Few-shot | Alpaca (13B) | 38.9 | 20.1 | 2.2 | 33.1 | 50.5 | 10.3 | 5.6 | 34.8 |
| Few-shot | Vicuna (13B) | 35.4 | 37.2 | 0.3 | 32.6 | 50.6 | 9.1 | 8.4 | 34.1 |
| Few-shot | ChatGPT | 46.6 | 27.6 | 35.8 | 39.2 | 62.6 | 26.8 | 49.5 | 53.3 |
| Few-shot | GPT-4 | 61.1 | 31.3 | 68.8 | 60.0 | 85.2 | 53.3 | 88.9 | 84.3 |
| Fine-tuned | RoBERTa (330M) | 62.5 | 54.6 | 74.7 | 65.0 | 47.2 | 25.2 | 62.3 | 49.8 |
| Fine-tuned | GPT-2 (1.5B) | 63.6 | 54.6 | 71.9 | 63.5 | 51.1 | 18.6 | 60.7 | 47.4 |
| Fine-tuned | T5 (770M) | 45.9 | 57.1 | 71.6 | 59.1 | 58.5 | 24.3 | 72.5 | 61.6 |
| Fine-tuned | Flan-T5 (770M) | 57.3 | 50.1 | 70.5 | 59.3 | 64.3 | 27.6 | 72.9 | 64.5 |
| Fine-tuned | Flan-T5 (3B) | 48.1 | 48.7 | 67.1 | 55.7 | 77.7 | 44.4 | 80.0 | 75.2 |
| Fine-tuned | Flan-T5 (11B) | 48.4 | 49.9 | 66.5 | 55.4 | 81.6 | 38.9 | 76.9 | 72.7 |
| Fine-tuned | LLaMA (7B) | 62.2 | 50.7 | 74.6 | 62.8 | 77.9 | 41.1 | 78.3 | 72.5 |
| Fine-tuned | Alpaca (7B) | 66.8 | 41.1 | 76.8 | 64.5 | 73.0 | 30.2 | 80.0 | 72.5 |
| Fine-tuned | Alpaca (13B) | 63.6 | 48.9 | 75.8 | 63.6 | 77.5 | 34.5 | 79.4 | 73.3 |
| Fine-tuned | Vicuna (13B) | 66.2 | 49.1 | 78.6 | 66.0 | 69.4 | 37.7 | 79.9 | 72.1 |

Prompt LLMs (zero/few-shot)

We can prompt LLMs such as ChatGPT and GPT-4 to evaluate attribution. The input consists of the evaluation task prompt, a Claim (the concatenation of the Query and the Answer), and a Reference. For example:

Verify whether a given reference can support the claim. Options: Attributable, Extrapolatory or Contradictory. Attributable means the reference fully supports the claim, Extrapolatory means the reference lacks sufficient information to validate the claim, and Contradictory means the claim contradicts the information presented in the reference.

Claim: Who is the current CEO of Twitter? The current CEO of Twitter is Elon Musk

Reference: Elon Musk is the CEO of Twitter. Musk took over as CEO in October 2022 following a back-and-forth affair in which the billionaire proposed to purchase the social media company for $44 billion, tried to back out, and then ultimately went through with the acquisition. After becoming CEO, former CEO Parag Agrawal, CFO Ned Segal, and legal affairs and policy chief Vijaya Gadde were all dismissed from the company.

To replicate the numbers in the table for ChatGPT/GPT-4, put your OpenAI API key in ./api_key.txt and then run the example notebook prompt_chatgpt_gpt4.ipynb.
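
As a rough illustration of what the notebook does (not a substitute for it), the API call might look like the sketch below. It assumes the legacy openai<1.0 Python client and reuses the zero-shot prompt shown above; the exact prompt wording, model versions, and decoding parameters behind the reported numbers may differ.

# Minimal zero-shot sketch with the OpenAI API (assumes the legacy openai<1.0 client).
# The model name and parameters are illustrative; see prompt_chatgpt_gpt4.ipynb for the real setup.
import openai

openai.api_key = open("./api_key.txt").read().strip()

prompt = (
    "Verify whether a given reference can support the claim. Options: Attributable, "
    "Extrapolatory or Contradictory. Attributable means the reference fully supports the "
    "claim, Extrapolatory means the reference lacks sufficient information to validate the "
    "claim, and Contradictory means the claim contradicts the information presented in the "
    "reference.\n\n"
    "Claim: Who is the current CEO of Twitter? The current CEO of Twitter is Elon Musk\n"
    "Reference: Elon Musk is the CEO of Twitter. Musk took over as CEO in October 2022 "
    "following a back-and-forth affair in which the billionaire proposed to purchase the "
    "social media company for $44 billion, tried to back out, and then ultimately went "
    "through with the acquisition. After becoming CEO, former CEO Parag Agrawal, CFO Ned "
    "Segal, and legal affairs and policy chief Vijaya Gadde were all dismissed from the company."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or "gpt-4"
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])  # e.g., "Attributable"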

To prompt LLaMA/Alpaca/Vicuna, please see below for how to run inference on these models.

Fine-tune LMs

You can fine-tune any LM on our repurposed datasets to evaluate attribution.

Here we give an example of fine-tuning LLaMA/Alpaca/Vicuna. You can set --model_name_or_path to any LLaMA-family model. We fully fine-tune the LLaMA/Alpaca/Vicuna 7B/13B models on 4 A100 80GB GPUs.

torchrun --nproc_per_node 4 train_alpaca.py \
  --model_name_or_path chavinlo/alpaca-13b \
  --data_path osunlp/AttrScore \
  --train_subset 'combined_train' \
  --input_has_query True \
  --num_train_samples -1 \
  --bf16 True \
  --output_dir tmp/alpaca_13b_combined_train/ \
  --evaluation_strategy steps \
  --eval_steps 500 \
  --num_train_epochs 1 \
  --model_max_length 512 \
  --per_device_train_batch_size 2 \
  --per_device_eval_batch_size 2 \
  --gradient_accumulation_steps 8 \
  --save_strategy steps \
  --save_steps 5000 \
  --save_total_limit 1 \
  --learning_rate 2e-5 \
  --weight_decay 0. \
  --warmup_ratio 0.03 \
  --lr_scheduler_type cosine \
  --logging_steps 1 \
  --fsdp 'full_shard auto_wrap' \
  --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
  --tf32 True

You can also load our fine-tuned models for evaluation. We release checkpoints trained on the combined_train dataset as Hugging Face models (e.g., osunlp/attrscore-flan-t5-xl and osunlp/attrscore-vicuna-13b).

For example:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("osunlp/attrscore-flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("osunlp/attrscore-flan-t5-xl")
input = "As an Attribution Validator, your task is to verify whether a given reference can support the given claim. A claim can be either a plain sentence or a question followed by its answer. Specifically, your response should clearly indicate the relationship: Attributable, Contradictory or Extrapolatory. A contradictory error occurs when you can infer that the answer contradicts the fact presented in the context, while an extrapolatory error means that you cannot infer the correctness of the answer based on the information provided in the context. \n\nClaim: Who is the current CEO of Twitter? The current CEO of Twitter is Elon Musk \n Reference: Elon Musk is the CEO of Twitter. Musk took over as CEO in October 2022 following a back-and-forth affair in which the billionaire proposed to purchase the social media company for $44 billion, tried to back out, and then ultimately went through with the acquisition. After becoming CEO, former CEO Parag Agrawal, CFO Ned Segal, and legal affairs and policy chief Vijaya Gadde were all dismissed from the company."
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output) #'Attributable'

Or simply use the pipeline:

from transformers import pipeline
model = pipeline("text2text-generation","osunlp/attrscore-flan-t5-xl")
input = "As an Attribution Validator, your task is to verify whether a given reference can support the given claim. A claim can be either a plain sentence or a question followed by its answer. Specifically, your response should clearly indicate the relationship: Attributable, Contradictory or Extrapolatory. A contradictory error occurs when you can infer that the answer contradicts the fact presented in the context, while an extrapolatory error means that you cannot infer the correctness of the answer based on the information provided in the context. \n\nClaim: Who is the current CEO of Twitter? The current CEO of Twitter is Elon Musk \n Reference: Elon Musk is the CEO of Twitter. Musk took over as CEO in October 2022 following a back-and-forth affair in which the billionaire proposed to purchase the social media company for $44 billion, tried to back out, and then ultimately went through with the acquisition. After becoming CEO, former CEO Parag Agrawal, CFO Ned Segal, and legal affairs and policy chief Vijaya Gadde were all dismissed from the company."
output = model(input)[0]['generated_text']
print(output) #'Attributable'

We also provide an inference and evaluation script for LLaMA-based models:

python inference_alpaca.py \
--model_name_or_path  osunlp/attrscore-vicuna-13b \
--test_data_path osunlp/AttrScore \
--subset_name attreval_simulation \
--model_max_length 512
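
If you prefer a plain Python loop over the script above, a rough evaluation sketch with the fine-tuned Flan-T5 pipeline is given below. The split name ("test"), the field names ("query", "answer", "reference", "label"), and the exact accuracy computation are assumptions based on the earlier example; inference_alpaca.py remains the reference implementation for reproducing the reported numbers.

# Rough accuracy sketch on AttrEval-GenSearch with the fine-tuned Flan-T5 pipeline.
# Split/field names and the prompt template are assumptions; check the dataset page and
# inference_alpaca.py before relying on the numbers this produces.
from datasets import load_dataset
from transformers import pipeline

evaluator = pipeline("text2text-generation", "osunlp/attrscore-flan-t5-xl")
eval_set = load_dataset("osunlp/AttrScore", "attreval_gensearch")["test"]  # assumed split name

TEMPLATE = (
    "As an Attribution Validator, your task is to verify whether a given reference can support "
    "the given claim. A claim can be either a plain sentence or a question followed by its answer. "
    "Specifically, your response should clearly indicate the relationship: Attributable, "
    "Contradictory or Extrapolatory. A contradictory error occurs when you can infer that the "
    "answer contradicts the fact presented in the context, while an extrapolatory error means that "
    "you cannot infer the correctness of the answer based on the information provided in the "
    "context. \n\nClaim: {query} {answer} \n Reference: {reference}"
)

correct = 0
for example in eval_set:
    prompt = TEMPLATE.format(query=example["query"], answer=example["answer"],
                             reference=example["reference"])
    prediction = evaluator(prompt, max_new_tokens=8)[0]["generated_text"].strip()
    correct += int(prediction.lower() == example["label"].lower())  # assumes string labels

print(f"Overall accuracy: {correct / len(eval_set):.3f}")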

Acknowledgement & Limitations

All the datasets in this project are intended for research purposes only. We collect and annotate the evaluation data from publicly available information on the web, with the assistance of a generative search engine, New Bing. We acknowledge that LLMs can reproduce and amplify harmful information present in the data. We have made an effort to mitigate this risk by carefully selecting our evaluation data and by conducting analyses to identify and mitigate potential risks in the process.

Our annotated evaluation set, AttrEval-GenSearch, is derived from New Bing, which uses GPT-4 as its backbone. It is crucial to note that we also use GPT-4 to evaluate attribution on AttrEval-GenSearch, where it achieves the best performance, with around 85% overall accuracy. Some bias might arise from GPT-4 both generating the test examples and evaluating the attribution, which could skew our understanding of the models' true performance. We therefore caution against over-optimism. We also acknowledge that AttrEval-GenSearch is moderate in size and may not fully represent the real usage setting of attributed LLMs.

In addition, the AttrEval-Simulation dataset still differs from real-world scenarios. The error patterns in this simulated dataset may be overly simplistic and lack diversity, which can limit models' ability to handle more complex and varied real-world errors. It is also worth noting that this simulated dataset may contain noise and erroneous labels, which could further impede learning and subsequent performance. Obtaining higher-quality training data for attribution evaluation at scale is a major direction for future work.

Citation Information

If you find this code or dataset useful, please consider citing our paper:

@article{yue2023automatic,
  title={Automatic Evaluation of Attribution by Large Language Models},
  author={Yue, Xiang and Wang, Boshi and Zhang, Kai and Chen, Ziru and Su, Yu and Sun, Huan},
  journal={arXiv preprint arXiv:2305.06311},
  year={2023}
}

Contact

Feel free to reach out to Xiang Yue, Yu Su, or Huan Sun if you have any questions.