<div align="center"> <img src="image/detectrl-svg.svg" width="200px"> </div>

<h2 align="center"> <a href="https://arxiv.org/abs/2410.23746">[NeurIPS D&B 2024] DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios</a> </h2>

<h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest updates. </h5>

<div align="center">


This repository is the official implementation of DetectRL, a benchmark for detecting LLM-generated text in real-world scenarios. It covers multiple realistic settings, including the use of various prompts, human revision of LLM-generated text, adversarial spelling errors, and deliberate attacks on detectors, and aims to provide real utility to researchers on the topic and to practitioners looking for consistent evaluation methods.

</div>

📣 News

🧐 Overview

<img src="image/detectrl-framework.png" width="1000px">

Previous and current popular detection benchmarks, such as TuringBench, MGTBench, MULTITuDE, MAGE, and M4, have primarily focused on evaluating detectors' performance across various domains, generative models, and languages by constructing idealized test data. However, they have overlooked the assessment of detectors' capabilities in more common scenarios encountered in practical applications, such as varied prompt usage and human revision, as shown in the following table.

| Benchmark ↓ \ Eval → | Multi Domains | Multi LLMs | Various Prompts | Human Revision | Writing Errors | Data Mixing | Detector Generalization | Training Length | Test Length | Real World Human Writing |
|---|---|---|---|---|---|---|---|---|---|---|
| TuringBench | ✓ | ✓ | - | - | - | - | - | - | - | - |
| MGTBench | ✓ | ✓ | - | ○ | ○ | - | △ | - | △ | - |
| MULTITuDE | ✓ | ✓ | - | - | - | - | △ | - | - | - |
| M4 | ✓ | ✓ | ✓ | - | - | - | ✓ | - | - | - |
| MAGE | ✓ | ✓ | - | ○ | - | - | ✓ | - | - | - |
| DetectRL (Ours) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

❗Note: Comparison with existing benchmarks. ✓: the benchmark evaluates this scenario. △: the scenario has been studied, but is not part of the evaluation. ○: a similar scenario exists, but it is not fully based on real-world usage.

πŸ† LeaderBoard

| Detectors ↓ \ Tasks → | Multi-Domain AUROC | Multi-Domain F1 | Multi-LLM AUROC | Multi-LLM F1 | Multi-Attack AUROC | Multi-Attack F1 | Domain-Generalization F1 | LLM-Generalization F1 | Attack-Generalization F1 | Train-Time F1 | Test-Time F1 | Human Writing AUROC | Human Writing F1 | AVG. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Rob-Base | 99.98 | 99.75 | 99.93 | 99.58 | 99.56 | 97.66 | 83.00 | 91.81 | 92.37 | 79.99 | 74.00 | 97.34 | 94.31 | 93.02 |
| Rob-Large | 99.78 | 98.87 | 95.16 | 90.03 | 99.87 | 99.03 | 77.20 | 82.85 | 83.96 | 86.08 | 85.23 | 96.68 | 94.63 | 91.49 |
| X-Rob-Base | 99.92 | 99.34 | 99.14 | 98.17 | 98.49 | 96.07 | 75.97 | 92.73 | 90.58 | 84.25 | 73.83 | 93.43 | 90.29 | 91.71 |
| X-Rob-Large | 99.01 | 97.44 | 97.40 | 93.47 | 99.31 | 97.75 | 76.14 | 85.89 | 73.42 | 86.35 | 79.83 | 97.21 | 94.43 | 90.59 |
| Binoculars | 83.95 | 78.25 | 83.30 | 74.83 | 85.05 | 78.53 | 77.47 | 74.10 | 74.70 | 73.82 | 74.34 | 90.68 | 85.98 | 79.61 |
| Revise-Detect. | 67.24 | 60.82 | 66.36 | 53.72 | 70.89 | 57.24 | 54.50 | 53.28 | 50.63 | 65.71 | 67.96 | 83.29 | 82.16 | 64.13 |
| Log-Rank | 64.43 | 57.53 | 63.75 | 54.18 | 68.52 | 55.15 | 55.10 | 52.78 | 51.28 | 57.44 | 59.74 | 88.46 | 83.85 | 62.48 |
| LRR | 65.47 | 55.45 | 64.93 | 53.01 | 68.53 | 57.99 | 54.61 | 52.73 | 57.41 | 57.09 | 58.15 | 85.99 | 80.56 | 62.46 |
| Log-Likelihood | 63.71 | 56.36 | 62.97 | 53.13 | 67.97 | 54.38 | 53.37 | 51.77 | 50.73 | 57.92 | 59.28 | 88.48 | 83.75 | 61.83 |
| DNA-GPT | 64.92 | 55.83 | 64.36 | 51.09 | 68.36 | 53.36 | 51.51 | 47.09 | 41.98 | 57.63 | 62.43 | 87.80 | 82.77 | 60.70 |
| Fast-DetectGPT | 58.52 | 48.07 | 59.58 | 46.55 | 60.70 | 50.63 | 48.35 | 36.56 | 49.47 | 61.31 | 55.08 | 76.03 | 68.47 | 55.33 |
| Rank | 51.34 | 44.97 | 50.33 | 42.06 | 57.08 | 48.83 | 42.61 | 41.49 | 38.84 | 41.67 | 46.65 | 83.86 | 80.00 | 51.52 |
| NPR | 48.37 | 41.41 | 47.27 | 40.04 | 53.49 | 45.22 | 38.58 | 38.83 | 36.10 | 37.60 | 42.17 | 80.03 | 75.98 | 48.08 |
| DetectGPT | 34.43 | 21.52 | 34.93 | 14.80 | 36.19 | 19.15 | 11.54 | 13.11 | 11.84 | 35.78 | 34.69 | 60.86 | 48.76 | 29.05 |
| Entropy | 46.02 | 27.40 | 46.97 | 34.25 | 43.75 | 24.69 | 25.06 | 31.07 | 16.53 | 13.38 | 15.99 | 22.39 | 16.60 | 28.01 |
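
Several of the zero-shot baselines above (Log-Likelihood, Log-Rank, Entropy, etc.) score a passage by how predictable it is under a proxy language model. The snippet below is a minimal, unofficial sketch of a Log-Likelihood-style detector; the choice of GPT-2 as the proxy model and the truncation length are illustrative assumptions, not the benchmark's exact configuration (see the evaluation scripts below for that).

```python
# Unofficial sketch of a Log-Likelihood zero-shot detector:
# score a text by its average token log-probability under a proxy LM.
# Higher (less negative) scores typically indicate LLM-generated text.
# GPT-2 and max_length=512 are illustrative choices, not the benchmark's settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def log_likelihood_score(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    # With labels == input_ids the model returns the mean cross-entropy loss;
    # its negation is the average per-token log-likelihood.
    outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()

print(log_likelihood_score("The quick brown fox jumps over the lazy dog."))
```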

βš™οΈ Data and Experimental Reproduction

Data Loading and Processing

```bash
# loading original dataset and sampling
sh load_dataset.sh
```

Data Generation and Benchmark Construction

```bash
# data generation
sh data_generation.sh

# benchmark construction
sh benchmark_construction.sh
```
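
Once the benchmark files have been constructed, they can be loaded for inspection or for custom experiments. The sketch below is not the official loader; the file path and the `text` / `label` field names are hypothetical, so check the files written by `benchmark_construction.sh` for the actual schema.

```python
# Hypothetical loader sketch (NOT the official one): read a constructed
# benchmark split and separate human-written from LLM-generated samples.
# The path and the "text" / "label" field names are assumptions for illustration.
import json

def load_split(path: str):
    with open(path, "r", encoding="utf-8") as f:
        records = json.load(f)
    human = [r["text"] for r in records if r["label"] == "human"]
    llm = [r["text"] for r in records if r["label"] == "llm"]
    return human, llm

human_texts, llm_texts = load_split("benchmark/multi_domain.json")  # hypothetical path
print(f"{len(human_texts)} human samples, {len(llm_texts)} LLM samples")
```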

Benchmark Evaluation

```bash
# Task 1 and Task 2 evaluation
sh domains_evaluation.sh
sh llms_evaluation.sh
sh attacks_evaluation.sh

# Task 3 evaluation
sh varying_length_evaluation.sh

# Task 4 evaluation
sh human_writing_evaluation.sh
```
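
The leaderboard above reports AUROC and F1, which the evaluation scripts compute for each task. If you want to score your own detector outputs outside those scripts, a minimal scikit-learn sketch looks like the following; it is not the benchmark's exact evaluation code, and the fixed 0.5 decision threshold is only an illustrative choice.

```python
# Unofficial sketch: turn per-sample detector scores into AUROC and F1.
# Assumes label 1 = LLM-generated, 0 = human-written, and that higher
# scores mean "more likely LLM-generated". The 0.5 threshold is illustrative;
# the benchmark's scripts may pick decision thresholds differently.
from sklearn.metrics import f1_score, roc_auc_score

labels = [1, 1, 0, 0, 1, 0]                      # ground-truth labels
scores = [0.91, 0.64, 0.22, 0.48, 0.77, 0.35]    # detector scores

auroc = roc_auc_score(labels, scores)
preds = [int(s >= 0.5) for s in scores]
f1 = f1_score(labels, preds)
print(f"AUROC: {auroc:.4f}  F1: {f1:.4f}")
```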

✏️ Citation

If you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil:.

```bibtex
@article{wu2024detectrl,
  title={DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios},
  author={Wu, Junchao and Zhan, Runzhe and Wong, Derek F and Yang, Shu and Yang, Xinyi and Yuan, Yulin and Chao, Lidia S},
  journal={arXiv preprint arXiv:2410.23746},
  year={2024}
}
```