ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models
If you have any questions about the following content or any suggestions, please contact us at argugpt@163.com.
In this repo, you will find:
- A corpus of 4k argumentative essays generated by 7 GPT models.
- Several classifiers for detecting GPT-generated argumentative essays (99% accuracy at the document level; 93% at the sentence level).
We make the following resources public:
- Models: We fine-tuned RoBERTa to detect machine-generated argumentative essays, training both document-level and sentence-level classifiers (a minimal inference sketch follows this list).
- App demo: We built a demo app in Hugging Face Spaces based on the sentence-level detector, which can be used directly in the browser.
- Paper: We released our research paper on arXiv.
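As a reference for using the released detectors, below is a minimal sketch of running inference with the Hugging Face transformers library. The model identifier is a placeholder, not the actual released checkpoint name; substitute the checkpoint from this repo or the Hugging Face Hub.

```python
# Minimal sketch: classify an essay with a fine-tuned RoBERTa detector.
# MODEL_ID is a placeholder; replace it with the released checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "path/to/argugpt-roberta-detector"  # placeholder, not the real ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

essay = "Some people believe that technology has made our lives easier..."
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# The human/machine label order depends on the released checkpoint's config.
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```

The sentence-level detector is used the same way, with single sentences as input.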
Introduction to the ArguGPT corpus
We compiled an 8k human-machine comparison corpus of argumentative essays (4k human vs. 4k machine). We also collected a 1k dataset for an out-of-distribution (OOD) test experiment, in which the essays are either generated by OOD models or prompted by OOD writing tasks. The sub-corpora are listed in the following table; a sketch of loading the released data follows the notes below.
sub-corpus | # essays | # tokens | mean length | source | access |
---|---|---|---|---|---|
WECCL-human | 1,845 | 450,657 | 244 | SWECCL 2.0 | SWECCL (Wen & Wang, 2008) |
WECCL-machine | 1,813 | 442,531 | 244 | GPT models | Released; see the data/argugpt folder |
TOEFL-human | 1,680 | 503,504 | 299 | TOEFL11 | Purchased from the LDC |
TOEFL-machine | 1,635 | 442,963 | 270 | GPT models | Released; see the data/argugpt folder |
GRE-human | 590 | 341,495 | 578 | GRE prep materials | Not released (no copyright) |
GRE-machine | 590 | 268,640 | 455 | GPT models | Released; see the data/argugpt folder |
OOD-human | 500 | 132,902 | 265 | CLEC | CLEC (Gui & Yang, 2003) |
OOD-machine | 500 | 180,120 | 360 | ChatGPT & four OOD models | Released; see the data/argugpt folder |
Note that the four OOD models are gpt-4, claude-instant, bloomz-7b, and flan-t5-11b. More detailed information about the ArguGPT corpus can be found in our paper.
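Since the machine-generated sub-corpora are released in the data/argugpt folder, a sketch like the following could load them for inspection. The file layout assumed here (CSV files) is an assumption; check the folder for the actual format and column names.

```python
# Sketch: load the released machine-generated essays for inspection.
# Assumes CSV files under data/argugpt; verify the actual layout in the repo.
from pathlib import Path

import pandas as pd

data_dir = Path("data/argugpt")
frames = [pd.read_csv(p) for p in sorted(data_dir.glob("*.csv"))]
essays = pd.concat(frames, ignore_index=True)

print(len(essays), "machine-generated essays loaded")
print(essays.head())
```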
Data split and baselines
We first split the data into train/dev/test sets; the TOEFL essays in the test split were also evaluated by human participants in a Turing test. We then established baselines by training detectors based on SVMs and RoBERTa (a minimal SVM sketch follows the split table below). Moreover, we conducted an ablation study on the effect of reducing the number of training data points. Finally, we evaluated two existing detectors on our test set: GPTZero and the RoBERTa detector trained by Guo et al. (2023).
split | TOEFL | WECCL | GRE | total |
---|---|---|---|---|
train | 3,058 | 2,715 | 980 | 6,753 |
dev | 300 | 300 | 100 | 700 |
test | 300 | 300 | 100 | 700 |
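As a rough illustration of the SVM baseline setup, here is a minimal, self-contained sketch using TF-IDF features and a linear SVM with scikit-learn. The features and hyperparameters actually used in the paper may differ; see the paper for details.

```python
# Sketch of an SVM baseline: TF-IDF features + linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy placeholder data; in practice, load essays and 0/1 labels
# (human vs. machine) from the train split described above.
train_texts = ["an essay written by a student", "an essay generated by a model"]
train_labels = [0, 1]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["another essay to classify"]))
```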
The accuracy of human evaluators on the TOEFL portion of the test set is only 64.65%, far behind the ML-based detectors.
train data | test data | majority baseline | RoBERTa | Best SVM | GPTZero | Guo et al. (2023) |
---|---|---|---|---|---|---|
doc-all | doc test | 50 | 99.38 | 95.14 | 98.86 | 89.86 |
doc-50% | doc test | 50 | 99.76 | 94.14 | - | - |
doc-25% | doc test | 50 | 99.14 | 93.86 | - | - |
doc-10% | doc test | 50 | 97.67 | 92.29 | - | - |
doc-all | para test | 52.62 | 74.58 | 83.61 | - | - |
para-all | para test | 52.62 | 97.88 | 90.55 | 92.11 | 79.95 |
doc-all | sent test | 54.18 | 49.73 | 72.15 | - | - |
sent-all | sent test | 54.18 | 93.84 | 81.00 | 90.10 | 71.44 |
doc-all | ood-ma test | 100 | 97.00 | 72.20 | 53.40 | 59.20 |
doc-all | ood-hu test | 100 | 98.47 | 94.80 | 100.00 | 99.00 |
Note: We split the essays into paragraphs and sentences, trained classifiers at the document, paragraph, and sentence levels, and evaluated each of them (see the segmentation sketch below).
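As an illustration, essays can be segmented into paragraphs and sentences along the following lines. NLTK's sentence tokenizer is one reasonable choice, though it may not match the paper's exact preprocessing.

```python
# Sketch: break an essay into paragraphs and sentences.
import nltk

# Newer NLTK versions may also require the "punkt_tab" resource.
nltk.download("punkt", quiet=True)

essay = "First paragraph. It has two sentences.\n\nSecond paragraph here."

# Paragraphs are assumed to be separated by blank lines.
paragraphs = [p.strip() for p in essay.split("\n\n") if p.strip()]
sentences = [s for p in paragraphs for s in nltk.sent_tokenize(p)]

print(len(paragraphs), "paragraphs,", len(sentences), "sentences")
```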
Team members
We are a group of students interested in language, linguistics, and NLP. The team is led by Hai Hu, an assistant professor in the English Department of Shanghai Jiao Tong University (SJTU) whose research interest is computational linguistics. We hope to contribute something interesting to the CL and NLP communities from the perspective of language learners.
Name | 姓名 | Affiliation | Status |
---|---|---|---|
Hai Hu | 胡海 | SFL, SJTU | Assistant Professor |
Yiwen Zhang | 张伊文 | Amazon | Language Engineer |
Shisen Yue | 岳士森 | SFL, SJTU | Undergraduate |
Wanyang Zhang | 章万扬 | SS, PKU | Graduate |
Xiaojing Zhao | 赵晓靖 | SFL, SJTU | Graduate |
Xinyuan Cheng | 程心远 | SFL, SJTU | Undergraduate |
Yikang Liu | 刘逸康 | SFL, SJTU | Graduate |
Ziyin Zhang | 张子殷 | SEIEE, SJTU | Graduate |
Citation
Please cite our work as:
```bibtex
@misc{liu2023argugpt,
      title={ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models},
      author={Yikang Liu and Ziyin Zhang and Wanyang Zhang and Shisen Yue and Xiaojing Zhao and Xinyuan Cheng and Yiwen Zhang and Hai Hu},
      year={2023},
      eprint={2304.07666},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```