Rageval
Evaluation tools for Retrieval-augmented Generation (RAG) methods.
Rageval is a tool that helps you evaluate RAG systems. The evaluation consists of six sub-tasks: query rewriting, document ranking, information compression, evidence verification, answer generation, and result validation.
Definition of tasks and metrics
1. The generate task
The generate task is to answer the question based on the contexts provided by the retrieval modules in RAG. Typically, these contexts are extracted/generated text snippets from the compressor or relevant documents from the re-ranker. Here, we divide the metrics used in the generate task into two categories: answer correctness and answer groundedness.
(1) Answer Correctness: these metrics evaluate correctness by comparing the generated answer with the groundtruth answer. Here are some commonly used metrics:
- Answer F1 Correctness: widely used, e.g., in the papers (Jiang et al.), (Yu et al.), and (Xu et al.).
- Answer NLI Correctness: also known as claim recall in the paper (Tianyu et al.).
- Answer EM Correctness: also known as Exact Match as used in the paper (Ivan Stelmakh et al.).
- Answer Bleu Score: also known as Bleu as used in the paper (Kishore Papineni et al.).
- Answer Ter Score: also known as Translation Edit Rate as used in the paper (Snover et al.).
- Answer chrF Score: also known as character n-gram F-score as used in the paper (Popovic et al.).
- Answer Disambig-F1: also known as Disambig-F1 as used in the paper (Ivan Stelmakh et al.) and the paper (Zhengbao Jiang et al.).
- Answer Rouge Correctness: also known as Rouge as used in the paper (Chin-Yew Lin).
- Answer Accuracy: also known as Accuracy as used in the paper (Dan Hendrycks et al.).
- Answer LCS Ratio: also known as LCS(%) as used in the paper (Nashid et al.).
- Answer Edit Distance: also known as Edit distance as used in the paper (Nashid et al.).
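Most of these correctness metrics share the same calling pattern as the F1 example in the Usage section below: build a Dataset with answers and gt_answers columns, then call compute. A minimal sketch for the exact-match variant (the class name AnswerEMCorrectness is inferred from the metric file rageval/metrics/_answer_exact_match.py linked in the benchmark tables; the exact signature may differ):

from datasets import Dataset
import rageval as rl

# Hypothetical sketch: exact-match correctness against ground-truth short answers.
sample = {
    "answers": ["Rick Kriseman won the 2016 St. Petersburg mayoral election."],
    "gt_answers": [["Kriseman", "Rick Kriseman"]]
}
dataset = Dataset.from_dict(sample)
metric = rl.metrics.AnswerEMCorrectness()  # class name assumed from the metric file
score, dataset = metric.compute(dataset)   # call pattern mirrors the F1 example in Usage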
(2) Answer Groundedness: these metrics evaluate groundedness (also known as factual consistency) by comparing the generated answer with the provided contexts. Here are some commonly used metrics:
- Answer Citation Precision: also known as citation precision in the paper (Tianyu et al.).
- Answer Citation Recall: also known as citation recall in the paper (Tianyu et al.).
- Context Reject Rate: also known as reject rate in the paper (Wenhao Yu et al.).
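Unlike the correctness metrics above, these groundedness metrics compare the answer against the retrieved contexts rather than the gold answer. The record below illustrates the kind of input they operate on (field names are illustrative, not a fixed Rageval schema): the answer cites passages with ALCE-style markers such as [1], and an NLI model checks whether each cited passage actually supports the cited statement.

# Illustrative record for citation-based groundedness evaluation (field names are examples).
sample = {
    "questions": ["Who won the 2016 St. Petersburg mayoral election?"],
    "answers": ["Rick Kriseman won the 2016 St. Petersburg mayoral election [1]."],
    "contexts": [[
        "Rick Kriseman, a Democrat, was elected mayor of St. Petersburg in 2016.",  # passage [1]
        "Rick Baker served as mayor of St. Petersburg from 2001 to 2010."           # passage [2]
    ]]
}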
2. The rewrite task
The rewrite task is to reformulate the user question into a set of queries, making them more friendly to the search module in RAG.
3. The search task
The search task is to retrieve relevant documents from the knowledge base.
(1) Context Adequacy: these metrics evaluate adequacy by comparing the retrieved documents with the groundtruth contexts. Here are some commonly used metrics:
(2) Context Relevance: these metrics evaluate relevance by comparing the retrieved documents with the groundtruth answers. Here are some commonly used metrics:
- Context Recall: also known as Context Recall in RAGAS framework.
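Conceptually, context recall measures how much of the groundtruth answer is supported by the retrieved documents. The toy function below approximates this with plain string containment purely for illustration; the RAGAS-style metric instead asks an evaluator LLM whether each groundtruth statement can be attributed to the contexts (see the next section).

def naive_context_recall(gt_statements, contexts):
    # Toy approximation: fraction of groundtruth statements found verbatim in any
    # retrieved context. The real metric replaces containment with an LLM judgment.
    supported = sum(
        any(stmt.lower() in ctx.lower() for ctx in contexts)
        for stmt in gt_statements
    )
    return supported / max(len(gt_statements), 1)

contexts = ["Rick Kriseman, a Democrat, was elected mayor of St. Petersburg in 2016."]
print(naive_context_recall(["elected mayor of St. Petersburg in 2016"], contexts))  # -> 1.0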
Set up Evaluator LLMs
Some metrics evaluations rely on LLMs as evaluators. You can either directly call OpenAI's API or deploy an open-source model as a RESTful API in the OpenAI format for evaluation.
- OpenAI
import os
os.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"
- Open source LLMs
Please use vLLM to set up the API server for open-source LLMs. For example, use the following command to deploy a Llama-3-8B model hosted on HuggingFace:
python -m vllm.entrypoints.openai.api_server \
--model meta-llama/Meta-Llama-3-8B-Instruct \
--tensor-parallel-size 8 \
--dtype auto \
--api-key sk-123456789 \
--gpu-memory-utilization 0.9 \
--port 5000
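Once the server is running, the evaluator can reach it through the same OpenAI-style client by pointing the API key and base URL at the local endpoint. A minimal sketch, assuming the server was started with the command above (the exact environment variable names depend on the OpenAI client version in use):

import os

# Point an OpenAI-compatible client at the local vLLM server started above.
os.environ["OPENAI_API_KEY"] = "sk-123456789"               # must match --api-key
os.environ["OPENAI_BASE_URL"] = "http://localhost:5000/v1"  # vLLM exposes an OpenAI-style API under /v1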
Benchmark Results
1. ASQA benchmark
The ASQA dataset is a question-answering dataset that contains factoid questions and long-form answers. The benchmark evaluates the correctness of the answers on this dataset. All detailed results can be downloaded from this repo. These results can also be reproduced with the script in this repo.
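The D-R Score reported in the table below is the geometric mean of Disambig-F1 and ROUGE-L, the combined metric defined in the ASQA paper. A quick sanity check against the llama2-7b-chat row:

from math import sqrt

# D-R Score = sqrt(Disambig-F1 * ROUGE-L); values taken from the llama2-7b-chat row below.
disambig_f1, rouge_l = 28.0, 30.7
print(round(sqrt(disambig_f1 * rouge_l), 1))  # -> 29.3, matching the D-R Score column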
<table> <col width=166> <col width=125> <col width=125 span=4> <tr> <td rowspan=2 align="center"><b>Model</b></td> <td rowspan=2 align="center"><b>Retriever</b></td> <td colspan=4 align="center"><b>Metric</b></td> </tr> <tr> <td align="center"><a href="rageval/metrics/_answer_exact_match.py">String EM</a></td> <td align="center"><a href="rageval/metrics/_answer_rouge_correctness.py">Rouge L</a></td> <td align="center"><a href="rageval/metrics/_answer_disambig_f1.py">Disambig F1</a></td> <td align="center"><a href="benchmarks/ASQA/asqa_benchmark.py">D-R Score</a></td> </tr> <tr> <td>gpt-3.5-turbo-instruct</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/asqa/gpt_3.5_turbo_instruct">no-retrieval</a></td> <td align="center">33.8</td> <td align="center">30.2</td> <td align="center">30.7</td> <td align="center">30.5</td> </tr> <tr> <td>mistral-7b</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/asqa/mistral_7b">no-retrieval</a></td> <td align="center">20.6</td> <td align="center">31.1</td> <td align="center">26.6</td> <td align="center">28.7</td> </tr> <tr> <td>llama2-7b-chat</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/asqa/llama2_7b_chat">no-retrieval</a></td> <td align="center">21.7</td> <td align="center">30.7</td> <td align="center">28.0</td> <td align="center">29.3</td> </tr> <tr> <td>llama3-8b-base</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/asqa/llama3_8b_base">no-retrieval</a></td> <td align="center">25.7</td> <td align="center">31.0</td> <td align="center">28.4</td> <td align="center">29.7</td> </tr> <tr> <td>llama3-8b-instruct</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/asqa/llama3_8b_instruct">no-retrieval</a></td> <td align="center">27.1</td> <td align="center">30.9</td> <td align="center">29.4</td> <td align="center">30.1</td> </tr> <tr> <td>solar-10.7b-instruct</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/asqa/solar_10.7b_instruct">no-retrieval</a></td> <td align="center">23.0</td> <td align="center">24.9</td> <td align="center">28.1</td> <td align="center">26.5</td> </tr> </table>

2. ALCE Benchmark
ALCE is a benchmark for Automatic LLMs' Citation Evaluation. ALCE contains three datasets: ASQA, QAMPARI, and ELI5. All detailed results can be downloaded from this repo. These results can also be reproduced with the script in this repo.
For more evaluation results, please view the benchmark's README: ALCE-ASQA and ALCE-ELI5.
<table> <col width=75> <col width=125> <col width=85> <col width=145> <col width=125 span=5> <tr> <td rowspan=2 align="center"><b>Dataset</b></td> <td rowspan=2 align="center"><b>Model</b></td> <td colspan=2 align="center"><b>Method</b></td> <td colspan=5 align="center"><b>Metric</b></td> </tr> <tr> <td align="center">retriever</td> <td align="center">prompt</td> <td align="center">MAUVE</td> <td align="center"><a href="rageval/metrics/_answer_exact_match.py">EM Recall</a></td> <td align="center"><a href="rageval/metrics/_answer_claim_recall.py">Claim Recall</a></td> <td align="center"><a href="rageval/metrics/_answer_citation_recall.py">Citation Recall</a></td> <td align="center"><a href="rageval/metrics/_answer_citation_precision.py">Citation Precision</a></td> </tr> <tr> <td rowspan=3 style="text-align:left;padding-left:10px"><a href="benchmarks/ALCE/ASQA/README.md">ASQA</a></td> <td rowspan=3>llama2-7b-chat</td> <td rowspan=1>GTR</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/alce_asqa_gtr">vanilla(5-psg)</a></td> <td align="center">-</td> <td align="center">33.3</td> <td align="center">-</td> <td align="center">55.9</td> <td align="center">80.0</td> </tr> <tr> <td>DPR</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/alce_asqa_dpr">vanilla(5-psg)</a></td> <td align="center">-</td> <td align="center">29.2</td> <td align="center">-</td> <td align="center">49.2</td> <td align="center">81.0</td> </tr> <tr> <td>Oracle</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/alce_asqa_oracle">vanilla(5-psg)</a></td> <td align="center">-</td> <td align="center">41.7</td> <td align="center">-</td> <td align="center">58.1</td> <td align="center">78.9</td> </tr> <tr> <td rowspan=2><a href="benchmarks/ALCE/ELI5/README.md">ELI5</a></td> <td rowspan=2>llama2-7b-chat</td> <td rowspan=1>BM25</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/alce_eli5_bm25">vanilla(5-psg)</a></td> <td align="center">-</td> <td align="center">-</td> <td align="center">11.5</td> <td align="center">26.6</td> <td align="center">74.5</td> </tr> <tr> <td>Oracle</td> <td><a href="https://huggingface.co/datasets/golaxy/rag-bench/viewer/alce_eli5_oracle">vanilla(5-psg)</a></td> <td align="center">-</td> <td align="center">-</td> <td align="center">17.8</td> <td align="center">34.0</td> <td align="center">75.6</td> </tr> </table>

Installation
git clone https://github.com/gomate-community/rageval.git
cd rageval
python setup.py install
Usage
1. Metric
Take F1 as an example.
from datasets import Dataset
import rageval as rl

# One sample with the generated answer and its ground-truth short answers.
sample = {
    "answers": [
        "Democrat rick kriseman won the 2016 mayoral election, while republican former mayor rick baker did so in the 2017 mayoral election."
    ],
    "gt_answers": [
        [
            "Kriseman",
            "Rick Kriseman"
        ]
    ]
}
dataset = Dataset.from_dict(sample)
metric = rl.metrics.AnswerF1Correctness()
# Returns the aggregate F1 score along with the evaluated dataset.
score, dataset = metric.compute(dataset)
2. Benchmark
Benchmarks can be run directly using the provided scripts (take ALCE-ELI5 as an example).
bash benchmarks/ALCE/ELI5/run.sh
Contribution
Please make sure to read the Contributing Guide before creating a pull request.
About
This project is currently in its preliminary stage.