Adversarial QA

Paper

Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension

Dataset

Version 1.0 is available here: https://dl.fbaipublicfiles.com/dynabench/qa/aqa_v1.0.zip.

For further details, see adversarialQA.github.io.
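The collected data is distributed in SQuAD-style JSON. As a minimal sketch of loading it (the extracted file name below is an assumption about the archive layout, not documented here):

```python
import json

# Load one split of the dataset. "aqa_v1.0/train.json" is a placeholder;
# adjust to the actual contents of aqa_v1.0.zip after extraction.
with open("aqa_v1.0/train.json") as f:
    dataset = json.load(f)

# SQuAD-style layout: articles -> paragraphs -> question/answer pairs.
for article in dataset["data"]:
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]
        for qa in paragraph["qas"]:
            question = qa["question"]
            answers = [answer["text"] for answer in qa["answers"]]
```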


Leaderboard

If you want to have your model added to the leaderboard, please submit your model predictions to the live leaderboard on Dynabench.

| Model | Reference | Overall (F1) |
| --- | --- | --- |
| RoBERTa-Large | Liu et al., 2019 | 64.4% |
| BERT-Large | Devlin et al., 2018 | 62.7% |
| BiDAF | Seo et al., 2016 | 28.5% |

Implementation

For training and evaluating BiDAF models, we use AllenNLP.
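As a minimal sketch of querying a trained BiDAF model through AllenNLP's predictor interface (the archive path is a placeholder for a model produced by `allennlp train`; depending on the AllenNLP version, the reading-comprehension models may live in the separate allennlp-models package):

```python
from allennlp.predictors.predictor import Predictor

# "model.tar.gz" is a placeholder for a trained BiDAF model archive.
predictor = Predictor.from_path("model.tar.gz")

result = predictor.predict(
    passage="Annotators write questions that the model in the loop "
            "fails to answer correctly.",
    question="What do annotators write?",
)
print(result["best_span_str"])  # the extracted answer span
```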

For training and evaluating BERT and RoBERTa models, we use Transformers.
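To illustrate the inference side (this is not the exact training setup from the paper), a fine-tuned extractive QA checkpoint can be run with the Transformers pipeline API; the checkpoint name below is a stand-in for any SQuAD-style model:

```python
from transformers import pipeline

# "your-roberta-squad-checkpoint" is a placeholder for a RoBERTa model
# fine-tuned on SQuAD v1.1 (optionally combined with adversarial data).
qa = pipeline("question-answering", model="your-roberta-squad-checkpoint")

prediction = qa(
    question="What do annotators write?",
    context="Annotators write questions that the model in the loop "
            "fails to answer correctly.",
)
print(prediction["answer"], prediction["score"])
```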

We welcome researchers from a range of fields (linguistics, machine learning, cognitive science, psychology, etc.) to work on adversarialQA. You can use the code to reproduce the results in our paper or as a starting point for your own research.

We use SQuAD v1.1 as training data for the adversarial models used in the data collection process. For some of our experiments, we also combine SQuAD v1.1 with the datasets we collect.
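Since both resources share the SQuAD JSON structure, combining them amounts to concatenating their top-level `data` lists. A minimal sketch (the file names are placeholders):

```python
import json

def merge_squad_files(paths, out_path):
    """Concatenate the top-level "data" lists of SQuAD-format files."""
    merged = {"version": "combined", "data": []}
    for path in paths:
        with open(path) as f:
            merged["data"].extend(json.load(f)["data"])
    with open(out_path, "w") as f:
        json.dump(merged, f)

# Placeholder file names for the SQuAD and AdversarialQA training splits.
merge_squad_files(["squad_train-v1.1.json", "aqa_train.json"],
                  "combined_train.json")
```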

Other References

We use the following resources in training the models used for adversarial human annotation and in our analysis:

- SQuAD v1.1 (Rajpurkar et al., 2016)
- AllenNLP
- Transformers

Citation

@article{bartolo2020beat,
  title={Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension},
  author={Bartolo, Max and Roberts, Alastair and Welbl, Johannes and Riedel, Sebastian and Stenetorp, Pontus},
  journal={arXiv preprint arXiv:2002.00293},
  year={2020}
}

License

AdversarialQA is licensed under Creative Commons Non-Commercial 4.0. See the LICENSE file for details.
