Adversarial QA

Paper

Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension

Dataset

Version 1.0 is available here: https://dl.fbaipublicfiles.com/dynabench/qa/aqa_v1.0.zip.
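If you prefer to fetch the data programmatically, the following is a minimal sketch; only the URL is taken from this page, and the output directory is an arbitrary placeholder:

import io
import urllib.request
import zipfile

URL = "https://dl.fbaipublicfiles.com/dynabench/qa/aqa_v1.0.zip"

# Download the archive into memory and unpack it locally.
with urllib.request.urlopen(URL) as response:
    archive = zipfile.ZipFile(io.BytesIO(response.read()))
archive.extractall("data/aqa_v1.0")  # arbitrary output directory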

For further details, see adversarialQA.github.io.

Leaderboard

To have your model added to the leaderboard, please submit your model's predictions to the live leaderboard on Dynabench.

Model           Reference            Overall (F1)
RoBERTa-Large   Liu et al., 2019     64.4%
BERT-Large      Devlin et al., 2018  62.7%
BiDAF           Seo et al., 2016     28.5%

Implementation

For training and evaluating BiDAF models, we use AllenNLP.
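As a rough illustration, training can be launched through AllenNLP's Python entry point; a minimal sketch, assuming AllenNLP is installed and a BiDAF training configuration exists at the (hypothetical) path below:

from allennlp.commands.train import train_model_from_file

# Train a BiDAF model from a jsonnet/JSON config; the trained archive
# (model.tar.gz) is written to the serialization directory.
train_model_from_file(
    parameter_filename="configs/bidaf.jsonnet",  # hypothetical config path
    serialization_dir="output/bidaf",            # hypothetical output directory
)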

For training and evaluating BERT and RoBERTa models, we use Transformers.
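For instance, a fine-tuned checkpoint can be queried with the Transformers question-answering pipeline; a minimal sketch, where the model path is a placeholder for your own trained BERT or RoBERTa checkpoint:

from transformers import pipeline

# Load an extractive QA pipeline from a fine-tuned checkpoint
# ("path/to/finetuned-roberta" is a placeholder, not a real model name).
qa = pipeline("question-answering", model="path/to/finetuned-roberta")

prediction = qa(
    question="What do we use to train BERT and RoBERTa models?",
    context="For training and evaluating BERT and RoBERTa models, we use Transformers.",
)
print(prediction["answer"], prediction["score"])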

We welcome researchers from various fields (linguistics, machine learning, cognitive science, psychology, etc.) to work on adversarialQA. You can use the code to reproduce the results in our paper, or as a starting point for your own research.

We use SQuAD v1.1 as training data for the adversarial models used in the data collection process. For some of our experiments, we also combine SQuAD v1.1 with the datasets we collect.
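Since SQuAD v1.1 ships as a single JSON file with a top-level "data" list, combining it with another dataset in the same format amounts to concatenating those lists; a minimal sketch, assuming the collected data is also stored in SQuAD-style JSON and using placeholder file names:

import json

def load_squad(path):
    # SQuAD-format files look like {"version": ..., "data": [articles...]}.
    with open(path) as f:
        return json.load(f)

squad = load_squad("train-v1.1.json")      # SQuAD v1.1 training set
collected = load_squad("aqa_train.json")   # placeholder name for collected data

# Concatenate the article lists to build a combined training set.
combined = {"version": "combined", "data": squad["data"] + collected["data"]}

with open("combined_train.json", "w") as f:
    json.dump(combined, f)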

Other References

We use the following resources in training the models used for adversarial human annotation and in our analysis:

SQuAD v1.1 (training data)
AllenNLP (BiDAF)
Transformers (BERT and RoBERTa)

Citation

@article{bartolo2020beat,
  title={Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension},
  author={Bartolo, Max and Roberts, Alastair and Welbl, Johannes and Riedel, Sebastian and Stenetorp, Pontus},
  journal={arXiv preprint arXiv:2002.00293},
  year={2020}
}

License

AdversarialQA is licensed under the MIT License. See the LICENSE file for details.