Dataset and code for the NAACL 2022 paper "Benchmarking Intersectional Biases in NLP"
To recreate the figures and tables from the paper, run the generate_plots.R script. This script is the main entry point for reproducing the results in the paper.
The NAACL2022_fairness.ipynb notebook is included for demonstration purposes. It serves as a reference for calculating the various metrics, but it will need to be adapted to the user's needs (e.g., checking whether model predictions are correct or incorrect).
If you use this repository, please cite the accompanying paper:
@inproceedings{lalor-etal-2022-benchmarking,
    title = "Benchmarking Intersectional Biases in NLP",
    author = "Lalor, John P. and
      Yang, Yi and
      Smith, Kendall and
      Forsgren, Nicole and
      Abbasi, Ahmed",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    year = "2022",
    publisher = "Association for Computational Linguistics",
}