Code for paper "Bias and Fairness in Authorial Gender Attribution"

This repository contains the code and results used in the following paper: http://aclweb.org/anthology/W17-1602

The data is not included for copyright reasons.

Reference

@InProceedings{koolen2017stereotypes,
  author    = {Koolen, Corina  and  van Cranenburgh, Andreas},
  title     = {These are not the Stereotypes You are Looking For:
	Bias and Fairness in Authorial Gender Attribution},
  booktitle = {Proceedings of the First ACL Workshop on Ethics in Natural Language Processing},
  year      = {2017},
  pages     = {12--22},
  url       = {http://aclweb.org/anthology/W17-1602},
  abstract  = {Stylometric and text categorization results show that author gender can be
	discerned in texts with relatively high accuracy. However, it is difficult to
	explain what gives rise to these results and there are many possible
	confounding factors, such as the domain, genre, and target audience of a text.
	More fundamentally, such classification efforts risk invoking stereotyping and
	essentialism. We explore this issue in two datasets of Dutch literary novels,
	using commonly used descriptive (LIWC, topic modeling) and predictive (machine
	learning) methods. Our results show the importance of controlling for variables
	in the corpus and we argue for taking care not to overgeneralize from the
	results.},
}
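The abstract above refers to predictive (machine learning) methods for attributing author gender from text. As a rough, illustrative sketch of that general kind of bag-of-words setup (this is not the paper's actual pipeline, and the tokens and labels below are invented toy data), a minimal multinomial naive Bayes classifier can be written in plain Python:

```python
from collections import Counter
import math

def train_nb(docs):
    """Train a toy multinomial naive Bayes classifier.

    docs: list of (tokens, label) pairs, e.g. (["zij", "haar"], "F").
    Returns a predict(tokens) -> label function.
    Class priors are omitted for simplicity (assumes balanced classes).
    """
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> total token count
    vocab = set()
    for tokens, label in docs:
        counts.setdefault(label, Counter()).update(tokens)
        totals[label] += len(tokens)
        vocab.update(tokens)
    v = len(vocab)

    def score(tokens, label):
        # Log-likelihood with add-one (Laplace) smoothing.
        c = counts[label]
        return sum(math.log((c[t] + 1) / (totals[label] + v)) for t in tokens)

    def predict(tokens):
        return max(counts, key=lambda label: score(tokens, label))

    return predict
```

For example, training on two tiny invented documents and classifying new token lists:

```python
predict = train_nb([("zij haar boek".split(), "F"),
                    ("hij zijn auto".split(), "M")])
predict("zij haar".split())  # -> "F"
```

A sketch like this only illustrates why surface word frequencies can separate classes; as the paper argues, such separation may reflect confounds (domain, genre, audience) rather than gender as such.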