
<p align="center"> 🤗 <a href="https://huggingface.co/datasets/Infinigence/LVEval" target="_blank">HF Repo</a> • 📃 <a href="https://arxiv.org/abs/2402.05136" target="_blank">Paper</a> </p>

Read the Chinese version.

LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K

LV-Eval is a challenging long-context benchmark with five length levels (16k, 32k, 64k, 128k, and 256k) reaching up to 256k words. The average number of words is 102,380, and the min/max number of words is 11,896/387,406. LV-Eval features two main tasks, single-hop QA and multi-hop QA, comprising 11 bilingual datasets. The design of LV-Eval incorporates three key techniques: confusing facts insertion (CFI), keyword and phrase replacement (KPR), and keyword-recall-based metrics (AK, short for metrics with Answer Keywords and a word blacklist). Together, these provide a challenging, knowledge-leakage-mitigated, and more accurate evaluation of the long-context capability of LLMs. We anticipate that LV-Eval will serve as a valuable resource for supporting future research on long-context LLMs.
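
For intuition about the keyword-recall-based metric design, the sketch below shows a word-level F1 that ignores blacklisted words and is gated on the answer keywords being recalled. This is a simplified, hypothetical sketch only; the blacklist contents and the exact gating rule used by the official evaluation scripts differ.

```python
# Simplified sketch of an answer-keyword-gated, blacklist-filtered word F1.
# Not the official LV-Eval implementation; see evaluation.py in the repo for that.
from collections import Counter

WORD_BLACKLIST = {"the", "a", "an", "of", "and"}  # placeholder; the real blacklist is larger


def word_f1(prediction, reference):
    pred_tokens = [w for w in prediction.lower().split() if w not in WORD_BLACKLIST]
    ref_tokens = [w for w in reference.lower().split() if w not in WORD_BLACKLIST]
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def ak_gated_score(prediction, answers, answer_keywords=None):
    # If the answer keywords are not recalled in the prediction, the sample scores 0.
    if answer_keywords and answer_keywords.lower() not in prediction.lower():
        return 0.0
    return max(word_f1(prediction, answer) for answer in answers)
```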

Key Characteristics

Overview of LV-Eval

In the following tables, CFI is short for Confusing Facts Insertion, KPR is short for Keyword and Phrase Replacement, and AK is short for Answer Keywords used in keyword-recall-based metrics.

Single-hop QA

In a single-hop QA task, only a single piece of evidence in the context is needed to derive the answer.

| Dataset | CFI | #KPR | AK | Language | #QA pairs | #Contexts |
| --- | --- | --- | --- | --- | --- | --- |
| loogle-SD-mixup |  |  | ✔ | en | 160 | 800 |
| cmrc-mixup |  | 786 |  | zh | 200 | 1,000 |
| multifieldqa-en-mixup | ✔ | 476 | ✔ | en | 101 | 505 |
| multifieldqa-zh-mixup | ✔ | 424 | ✔ | zh | 133 | 665 |
| factrecall-en | ✔ | 3 | ✔ | en | 1 | 200 * 5 |
| factrecall-zh | ✔ | 3 | ✔ | zh | 1 | 200 * 5 |

factrecall-en and factrecall-zh are designed for "needle in a haystack" pressure testing, so the QA pair is kept the same across all data instances.

Multi-hop QA

In multi-hop QA tasks, the reasoning to derive the answer needs to gather multiple pieces of information from various locations in the context.

| Dataset | CFI | #KPR | AK | Language | #QA pairs | #Contexts |
| --- | --- | --- | --- | --- | --- | --- |
| dureader-mixup |  |  |  | zh | 176 | 880 |
| loogle-CR-mixup |  |  | ✔ | en | 99 | 495 |
| loogle-MR-mixup |  |  | ✔ | en | 139 | 695 |
| hotpotwikiqa-mixup | ✔ | 232 | ✔ | en | 124 | 620 |
| lic-mixup | ✔ |  | ✔ | zh | 197 | 985 |

Table of Contents

- [Leaderboard](#leaderboard)
- [Evaluate Your LLMs on LV-Eval](#evaluate-your-llms-on-lv-eval)
- [Detailed Results on Each Dataset](#detail-result-on-each-dataset)
- [License](#license)
- [Citation](#citation)

<a name="leaderboard"></a>

Leaderboard

Here are the average scores (%) over all tasks at the five length levels. We evaluate 2 commercial LLMs and 8 open-source LLMs.

Evaluated LLMs

| Model Name | SFT | Context Length | HuggingFace / API Endpoint |
| --- | --- | --- | --- |
| Llama2-7B-Chat-hf | ✔ | $4k$ | meta-llama/Llama-2-7b-chat-hf |
| Qwen-7B-8k-Chat | ✔ | $8k$ | Qwen/Qwen-7B-Chat |
| Vicuna-7B-16k-v1.5 | ✔ | $16k$ | lmsys/vicuna-7b-v1.5-16k |
| ChatGLM3-6B-32k | ✔ | $32k$ | THUDM/chatglm3-6b-32k |
| Llama2-7B-32k-Instruct | ✔ | $32k$ | togethercomputer/Llama-2-7B-32K-Instruct |
| BlueLM-7B-32k-Chat | ✔ | $32k$ | vivo-ai/BlueLM-7B-Chat-32K |
| LongChat-7B-32k-v1.5 | ✔ | $32k$ | lmsys/longchat-7b-v1.5-32k |
| Yi-6B-200k |  | $200k$ | 01-ai/Yi-6B-200K |
| GPT-4-8k | ✔ | $8k$ | gpt-4-0613 |
| GPT-3.5-16k | ✔ | $16k$ | gpt-3.5-turbo-1106 |

Overall Result

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 30.70 | 26.62 | 17.62 | 11.56 | 7.17 |
| BlueLM-7B-32k-Chat | 24.09 | 16.80 | 9.22 | 6.51 | 4.77 |
| Yi-6B-200k | 13.73 | 11.95 | 9.82 | 8.24 | 5.28 |
| LongChat-7B-32k-v1.5 | 13.54 | 10.70 | 6.80 | 5.35 | 4.22 |
| Llama2-7B-32k-Instruct | 13.66 | 10.07 | 6.03 | 4.43 | 2.87 |
| Qwen-7B-8k-Chat | 7.90 | 4.86 | 3.88 | 3.00 | 2.71 |
| Vicuna-7B-16k-v1.5 | 5.77 | 3.90 | 2.62 | 2.07 | 1.92 |
| Llama2-7B-Chat-hf | 4.18 | 2.19 | 1.81 | 1.45 | 1.10 |
| GPT-3.5-16k | 14.09 | 8.19 | 4.94 | 3.21 | 2.23 |
| GPT-4-8k | 18.27 | 10.60 | 6.84 | 4.08 | 2.54 |

<a name="evaluate-your-llms-on-lv-eval"></a>

Evaluate Your LLMs on LV-Eval

Load Data

```python
from datasets import load_dataset

DATASET_NAMES = [
    "hotpotwikiqa_mixup", "loogle_SD_mixup", "loogle_CR_mixup", "loogle_MIR_mixup",
    "multifieldqa_en_mixup", "multifieldqa_zh_mixup", "factrecall_en", "factrecall_zh",
    "cmrc_mixup", "lic_mixup", "dureader_mixup"
]

DATASET_LENGTH_LEVEL = [
    "16k", "32k", "64k", "128k", "256k"
]

def get_dataset_names(dataset_names, length_levels):
    datasets = []
    for name in dataset_names:
        for length in length_levels:
            datasets.append(f"{name}_{length}")
    return datasets

for dataset in get_dataset_names(DATASET_NAMES, DATASET_LENGTH_LEVEL):
    data = load_dataset("Infinigence/LVEval", dataset, split="test", token=True)
```

Alternatively, you can download the data to a local folder from the following link: https://huggingface.co/datasets/Infinigence/LVEval/resolve/main/{task_name}.zip

Remember to replace {task_name} with the name of the subset you want.

For example, if you want to download the data for hotpotwikiqa_mixup, you can visit this link: https://huggingface.co/datasets/Infinigence/LVEval/resolve/main/hotpotwikiqa_mixup.zip
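
If you prefer to fetch an archive programmatically, something like the following should work; it uses the huggingface_hub package and assumes the zip filenames on the Hub match the subset names shown above.

```python
# Download one subset archive from the Hugging Face Hub to the local cache.
from huggingface_hub import hf_hub_download

zip_path = hf_hub_download(
    repo_id="Infinigence/LVEval",
    filename="hotpotwikiqa_mixup.zip",
    repo_type="dataset",
)
print(zip_path)  # local path of the downloaded archive; unzip it to get the data files
```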

Data Format

All data in LV-Eval follow the format below.

```json
{
    "input": "The input/command for the task, usually short, such as the question in QA or the query in few-shot tasks",
    "context": "The documents fed into the long-context task",
    "answers": "A list of all ground-truth answers",
    "length": "Total length of the first three items (counted in characters for Chinese and in words for English)",
    "dataset": "The name of the dataset to which this piece of data belongs",
    "language": "The language of this piece of data",
    "answer_keywords": "The keywords or sentences manually selected from the answers",
    "confusing_facts": "The confusing facts inserted into the context to make the evaluation more challenging"
}
```
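
For a concrete feel of these fields, the snippet below loads one sample and assembles a simple QA prompt from `context` and `input`. The prompt template here is only a placeholder for illustration, not the one defined in utils.py.

```python
from datasets import load_dataset

# Load one 16k-level sample from the factrecall_en subset.
sample = load_dataset("Infinigence/LVEval", "factrecall_en_16k", split="test", token=True)[0]

# Placeholder prompt; the official prompt formats live in utils.py.
prompt = (
    "Please answer the question based on the given passages.\n\n"
    f"Passages:\n{sample['context']}\n\n"
    f"Question: {sample['input']}\nAnswer:"
)

print(sample["dataset"], sample["language"], sample["length"])
print(sample["answers"], sample["answer_keywords"])
```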

Evaluation

Install the requirements with pip: pip install -r requirements.txt.

Generally, we run evaluation in data-parallel mode. Pass model_path, model_name (modify this to match the names handled by the build_chat function in utils.py if you need a customized prompt format), and model_max_length (the model's context length minus 500, to reserve an output window) to the shell script in that order. For example:

bash batch_eval_multiple.sh /home/user/workspace/public_models/chatglm3-6b-32k chatglm3 31500

For models with extra-long context windows or larger model sizes, we suggest running evaluation in HF auto model-parallel mode. For example:

bash batch_eval_single.sh /home/user/workspace/public_models/Yi-6B-200K yi-200k 199500

We can also run the evaluation step by step. First, run prediction.py to get inference results. Select the model via --model-path, set the model name via --model-name, set the model max length via --model-max-len, and set the output directory via --output-dir. For example:

python prediction.py --model-path /home/user/workspace/public_models/chatglm3-6b-32k --model-name chatglm3 --model-max-len 31500 --output-dir ./outputs/

The prediction results will be saved in [output dir]/[model name]. Then, run evaluation.py on the prediction results obtained above to get the LV-Eval evaluation results. The prediction results directory is specified via --input-dir. For example:

python evaluation.py --input-dir ./outputs/chatglm3/

After that, the evaluation results are printed to the shell and saved as results.json and results.csv in the output directory.

Custom needs can be handled in config.py (to select the datasets and length levels to evaluate) and utils.py (to customize the prompt format of your models).
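
For example, a custom prompt format can be added as one more branch keyed on the model name. The snippet below is only a sketch of the idea; the exact signature of build_chat and the branches that already exist in utils.py may differ.

```python
# Sketch of a per-model prompt formatter in the spirit of build_chat in utils.py.
def build_chat(prompt, model_name):
    if "chatglm3" in model_name:
        # Chat models usually need their own chat template around the raw prompt.
        return f"<|user|>\n{prompt}\n<|assistant|>\n"
    if "my-custom-model" in model_name:  # hypothetical name for your own model
        return f"[INST] {prompt} [/INST]"
    return prompt  # default: pass the prompt through unchanged
```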

Additionally, we evaluate some commercial models through their APIs with the following scripts. For example, to evaluate OpenAI's GPT series, we need to pass model_name and model_max_length. Note that OPENAI_API_KEY needs to be set before evaluation.

bash batch_eval_gpt_single.sh gpt-4-1106-preview 127500

<a name="detail-result-on-each-dataset"></a>

Detailed Results on Each Dataset

Average scores over all length levels on each dataset.

Single-hop QA

| Model Name | loogle-SD-mixup | cmrc-mixup | multifieldqa-en-mixup | multifieldqa-zh-mixup | factrecall-en | factrecall-zh |
| --- | --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 22.29 | 28.16 | 12.93 | 18.99 | 52.60 | 6.10 |
| BlueLM-7B-32k-Chat | 13.02 | 17.53 | 7.32 | 11.49 | 24.03 | 18.80 |
| Yi-6B-200k | 29.17 | 1.27 | 7.75 | 1.84 | 22.28 | 13.95 |
| LongChat-7B-32k-v1.5 | 14.56 | 9.65 | 6.95 | 5.86 | 9.14 | 4.28 |
| Llama2-7B-32k-Instruct | 7.63 | 6.12 | 4.63 | 2.56 | 38.09 | 0.92 |
| Qwen-7B-8k-Chat | 4.78 | 5.81 | 4.52 | 4.57 | 0.80 | 5.45 |
| Vicuna-7B-16k-v1.5 | 4.68 | 6.04 | 3.44 | 2.89 | 0.09 | 0 |
| Llama2-7B-Chat-hf | 3.04 | 1.97 | 3.99 | 1.48 | 0.45 | 0 |
| GPT-3.5-16k | 13.99 | 5.16 | 9.78 | 8.51 | 2.87 | 5.28 |
| GPT-4-8k | 11.13 | 5.96 | 10.16 | 7.29 | 9.25 | 11.39 |

Multi-hop QA

| Model Name | dureader-mixup | loogle-CR-mixup | loogle-MR-mixup | hotpotwikiqa-mixup | lic-mixup |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 19.57 | 10.17 | 9.10 | 11.15 | 15.02 |
| BlueLM-7B-32k-Chat | 14.61 | 5.04 | 2.87 | 11.22 | 9.11 |
| Yi-6B-200k | 2.83 | 5.82 | 4.41 | 12.42 | 6.12 |
| LongChat-7B-32k-v1.5 | 10.34 | 8.59 | 6.03 | 6.98 | 6.92 |
| Llama2-7B-32k-Instruct | 9.57 | 2.51 | 1.92 | 2.31 | 5.27 |
| Qwen-7B-8k-Chat | 10.42 | 3.14 | 2.70 | 2.23 | 4.77 |
| Vicuna-7B-16k-v1.5 | 7.18 | 3.26 | 2.31 | 1.95 | 4.00 |
| Llama2-7B-Chat-hf | 5.49 | 2.62 | 1.80 | 1.74 | 1.02 |
| GPT-3.5-16k | 4.87 | 6.09 | 5.87 | 5.88 | 3.53 |
| GPT-4-8k | 12.07 | 7.26 | 5.91 | 7.46 | 5.28 |

Scores at each length level on each dataset.

loogle-SD-mixup

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 41.82 | 30.31 | 19.07 | 11.34 | 8.92 |
| BlueLM-7B-32k-Chat | 34.34 | 15.10 | 4.95 | 5.32 | 5.41 |
| Yi-6B-200k | 39.56 | 36.48 | 31.71 | 25.71 | 12.37 |
| LongChat-7B-32k-v1.5 | 27.42 | 18.21 | 12.09 | 9.11 | 5.97 |
| Llama2-7B-32k-Instruct | 13.94 | 10.58 | 5.53 | 4.80 | 3.30 |
| Qwen-7B-8k-Chat | 10.54 | 4.70 | 2.40 | 3.25 | 3.02 |
| Vicuna-7B-16k-v1.5 | 8.79 | 4.90 | 3.07 | 4.24 | 2.39 |
| Llama2-7B-Chat-hf | 6.75 | 2.61 | 2.58 | 2.04 | 1.24 |
| GPT-3.5-16k | 31.67 | 18.56 | 10.41 | 5.74 | 3.56 |
| GPT-4-8k | 27.01 | 14.01 | 8.00 | 5.14 | 1.48 |

cmrc-mixup

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 51.21 | 46.34 | 20.71 | 14.16 | 8.38 |
| BlueLM-7B-32k-Chat | 45.89 | 19.53 | 10.66 | 7.06 | 4.51 |
| Yi-6B-200k | 1.05 | 0.35 | 0.84 | 1.58 | 2.54 |
| LongChat-7B-32k-v1.5 | 20.99 | 10.77 | 8.97 | 3.77 | 3.75 |
| Llama2-7B-32k-Instruct | 13.86 | 7.31 | 4.10 | 2.95 | 2.40 |
| Qwen-7B-8k-Chat | 11.13 | 5.32 | 4.68 | 3.81 | 4.09 |
| Vicuna-7B-16k-v1.5 | 11.75 | 6.55 | 5.04 | 2.75 | 4.13 |
| Llama2-7B-Chat-hf | 3.85 | 1.08 | 1.72 | 1.64 | 1.54 |
| GPT-3.5-16k | 12.19 | 6.00 | 3.57 | 2.73 | 1.32 |
| GPT-4-8k | 14.67 | 3.33 | 5.31 | 3.81 | 2.68 |

multifieldqa-en-mixup

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 25.40 | 12.78 | 12.32 | 9.89 | 4.24 |
| BlueLM-7B-32k-Chat | 11.82 | 6.34 | 8.38 | 5.29 | 4.78 |
| Yi-6B-200k | 10.01 | 9.24 | 8.83 | 5.98 | 4.69 |
| LongChat-7B-32k-v1.5 | 12.02 | 7.58 | 7.84 | 3.11 | 4.22 |
| Llama2-7B-32k-Instruct | 8.03 | 4.96 | 4.12 | 3.90 | 2.13 |
| Qwen-7B-8k-Chat | 7.66 | 3.61 | 5.23 | 3.64 | 2.44 |
| Vicuna-7B-16k-v1.5 | 6.29 | 4.32 | 2.79 | 2.51 | 1.28 |
| Llama2-7B-Chat-hf | 8.81 | 5.55 | 1.58 | 2.54 | 1.49 |
| GPT-3.5-16k | 18.78 | 11.59 | 7.38 | 7.95 | 3.21 |
| GPT-4-8k | 19.00 | 12.69 | 8.30 | 7.25 | 3.54 |

multifieldqa-zh-mixup

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 32.38 | 24.48 | 20.97 | 10.00 | 7.05 |
| BlueLM-7B-32k-Chat | 22.05 | 17.64 | 7.36 | 5.90 | 4.48 |
| Yi-6B-200k | 2.85 | 0.75 | 1.89 | 2.11 | 1.58 |
| LongChat-7B-32k-v1.5 | 9.81 | 8.82 | 3.23 | 3.54 | 3.92 |
| Llama2-7B-32k-Instruct | 4.55 | 3.93 | 1.45 | 1.74 | 1.15 |
| Qwen-7B-8k-Chat | 8.82 | 5.68 | 3.01 | 2.84 | 2.52 |
| Vicuna-7B-16k-v1.5 | 5.82 | 4.45 | 2.03 | 0.88 | 1.26 |
| Llama2-7B-Chat-hf | 4.72 | 1.21 | 0.68 | 0.24 | 0.56 |
| GPT-3.5-16k | 18.94 | 12.21 | 6.29 | 2.94 | 2.15 |
| GPT-4-8k | 17.61 | 11.18 | 4.99 | 1.76 | 0.92 |

factrecall-en

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 91.50 | 89.00 | 46.00 | 24.00 | 12.50 |
| BlueLM-7B-32k-Chat | 58.50 | 32.17 | 15.50 | 9.00 | 5.00 |
| Yi-6B-200k | 24.88 | 23.09 | 24.96 | 22.04 | 16.44 |
| LongChat-7B-32k-v1.5 | 9.22 | 14.33 | 8.31 | 7.86 | 6.00 |
| Llama2-7B-32k-Instruct | 75.20 | 56.00 | 33.00 | 17.85 | 8.40 |
| Qwen-7B-8k-Chat | 1.77 | 1.12 | 0.71 | 0.18 | 0.22 |
| Vicuna-7B-16k-v1.5 | 0 | 0 | 0 | 0.25 | 0.20 |
| Llama2-7B-Chat-hf | 1.08 | 0.46 | 0.31 | 0.23 | 0.15 |
| GPT-3.5-16k | 8.25 | 3.27 | 1.80 | 0.60 | 0.45 |
| GPT-4-8k | 23.40 | 11.84 | 5.21 | 4.03 | 1.79 |

factrecall-zh

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 0 | 2.00 | 12.50 | 9.00 | 7.00 |
| BlueLM-7B-32k-Chat | 19.00 | 37.00 | 20.00 | 12.50 | 5.50 |
| Yi-6B-200k | 25.73 | 16.86 | 12.41 | 10.13 | 4.62 |
| LongChat-7B-32k-v1.5 | 7.20 | 5.00 | 3.50 | 3.70 | 2.00 |
| Llama2-7B-32k-Instruct | 2.55 | 0.74 | 0.53 | 0.49 | 0.29 |
| Qwen-7B-8k-Chat | 15.75 | 6.00 | 3.50 | 1.50 | 0.50 |
| Vicuna-7B-16k-v1.5 | 0 | 0 | 0 | 0 | 0 |
| Llama2-7B-Chat-hf | 0 | 0 | 0 | 0 | 0 |
| GPT-3.5-16k | 14.51 | 6.70 | 2.49 | 1.72 | 0.98 |
| GPT-4-8k | 28.03 | 15.24 | 8.08 | 3.58 | 2.00 |

dureader-mixup

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 23.99 | 25.21 | 22.01 | 17.94 | 8.72 |
| BlueLM-7B-32k-Chat | 19.40 | 19.74 | 14.44 | 10.95 | 8.51 |
| Yi-6B-200k | 2.87 | 2.98 | 2.88 | 2.36 | 3.06 |
| LongChat-7B-32k-v1.5 | 13.44 | 11.57 | 9.23 | 9.51 | 7.96 |
| Llama2-7B-32k-Instruct | 11.82 | 10.65 | 8.58 | 9.34 | 7.48 |
| Qwen-7B-8k-Chat | 12.00 | 12.80 | 10.48 | 8.15 | 8.65 |
| Vicuna-7B-16k-v1.5 | 9.67 | 7.65 | 6.62 | 6.25 | 5.70 |
| Llama2-7B-Chat-hf | 7.21 | 5.42 | 5.59 | 4.78 | 4.45 |
| GPT-3.5-16k | 8.01 | 5.26 | 4.26 | 3.30 | 3.50 |
| GPT-4-8k | 19.14 | 13.64 | 12.66 | 8.19 | 6.71 |

loogle-CR-mixup

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 14.41 | 14.10 | 9.92 | 6.95 | 5.46 |
| BlueLM-7B-32k-Chat | 9.01 | 7.36 | 3.81 | 2.40 | 2.60 |
| Yi-6B-200k | 8.25 | 8.83 | 4.73 | 4.05 | 3.23 |
| LongChat-7B-32k-v1.5 | 11.25 | 11.17 | 9.31 | 6.19 | 5.03 |
| Llama2-7B-32k-Instruct | 3.11 | 2.82 | 2.01 | 2.46 | 2.16 |
| Qwen-7B-8k-Chat | 5.48 | 3.30 | 3.82 | 1.14 | 1.94 |
| Vicuna-7B-16k-v1.5 | 5.00 | 4.25 | 3.76 | 1.99 | 1.28 |
| Llama2-7B-Chat-hf | 3.69 | 3.29 | 3.13 | 2.19 | 0.81 |
| GPT-3.5-16k | 10.04 | 8.39 | 5.58 | 3.08 | 3.37 |
| GPT-4-8k | 12.68 | 10.40 | 6.48 | 2.83 | 3.91 |

loogle-MR-mixup

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 15.83 | 11.62 | 7.00 | 7.24 | 3.82 |
| BlueLM-7B-32k-Chat | 4.90 | 3.14 | 1.68 | 2.46 | 2.19 |
| Yi-6B-200k | 6.94 | 7.67 | 2.69 | 3.44 | 1.32 |
| LongChat-7B-32k-v1.5 | 10.53 | 9.51 | 3.04 | 4.05 | 3.01 |
| Llama2-7B-32k-Instruct | 3.12 | 2.61 | 1.44 | 1.47 | 0.95 |
| Qwen-7B-8k-Chat | 4.93 | 2.95 | 2.37 | 1.80 | 1.46 |
| Vicuna-7B-16k-v1.5 | 5.17 | 3.83 | 0.96 | 0.55 | 1.06 |
| Llama2-7B-Chat-hf | 3.37 | 2.20 | 2.05 | 1.04 | 0.33 |
| GPT-3.5-16k | 12.95 | 7.03 | 6.23 | 2.13 | 1.00 |
| GPT-4-8k | 12.24 | 7.83 | 6.26 | 2.30 | 0.90 |

hotpotwikiqa-mixup

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 16.98 | 14.76 | 9.02 | 8.31 | 6.68 |
| BlueLM-7B-32k-Chat | 19.31 | 14.07 | 9.63 | 7.71 | 5.40 |
| Yi-6B-200k | 23.55 | 18.94 | 9.94 | 7.66 | 2.01 |
| LongChat-7B-32k-v1.5 | 11.57 | 10.71 | 4.77 | 5.49 | 2.37 |
| Llama2-7B-32k-Instruct | 3.54 | 2.31 | 2.20 | 1.86 | 1.62 |
| Qwen-7B-8k-Chat | 2.78 | 1.89 | 2.27 | 2.37 | 1.82 |
| Vicuna-7B-16k-v1.5 | 2.63 | 2.19 | 2.05 | 1.04 | 1.85 |
| Llama2-7B-Chat-hf | 3.99 | 1.30 | 1.84 | 0.81 | 0.75 |
| GPT-3.5-16k | 11.96 | 6.66 | 3.27 | 4.23 | 3.30 |
| GPT-4-8k | 13.51 | 10.62 | 6.67 | 4.13 | 2.36 |

lic-mixup

| Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ |
| --- | --- | --- | --- | --- | --- |
| ChatGLM3-6B-32k | 24.15 | 22.27 | 14.33 | 8.30 | 6.07 |
| BlueLM-7B-32k-Chat | 20.75 | 12.68 | 5.00 | 3.03 | 4.11 |
| Yi-6B-200k | 5.37 | 6.25 | 7.19 | 5.56 | 6.24 |
| LongChat-7B-32k-v1.5 | 15.45 | 10.02 | 4.54 | 2.47 | 2.14 |
| Llama2-7B-32k-Instruct | 10.55 | 8.87 | 3.41 | 1.85 | 1.66 |
| Qwen-7B-8k-Chat | 6.05 | 6.07 | 4.21 | 4.34 | 3.19 |
| Vicuna-7B-16k-v1.5 | 8.34 | 4.81 | 2.52 | 2.36 | 1.99 |
| Llama2-7B-Chat-hf | 2.48 | 0.99 | 0.48 | 0.42 | 0.73 |
| GPT-3.5-16k | 7.65 | 4.42 | 3.07 | 0.87 | 1.65 |
| GPT-4-8k | 13.69 | 5.86 | 3.23 | 1.90 | 1.70 |

<a name="license"></a>

License

In LV-Eval, the cmrc-mixup and lic-mixup datasets follow the CC-BY-SA-4.0 license, and the other datasets follow the MIT license.

<a name="citation"></a>

Citation

```bibtex
@misc{yuan2024lveval,
      title={LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K},
      author={Tao Yuan and Xuefei Ning and Dong Zhou and Zhijie Yang and Shiyao Li and Minghui Zhuang and Zheyue Tan and Zhuyu Yao and Dahua Lin and Boxun Li and Guohao Dai and Shengen Yan and Yu Wang},
      year={2024},
      eprint={2402.05136},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```