This repository provides a Python logistic regression implementation of the fair classification mechanisms introduced in the AISTATS'17, WWW'17, and NIPS'17 papers listed below.
We have summarised the key ideas of these three papers in a final report, "Who's the fairest of them all?". The report is "final_paper.pdf", and was graded a distinction (A grade).
1. Fairness Constraints: Mechanisms for Fair Classification.
   Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi.
   20th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, April 2017.
2. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment.
   Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi.
   26th International World Wide Web Conference (WWW), Perth, Australia, April 2017.
3. From Parity to Preference-based Notions of Fairness in Classification.
   Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi, Adrian Weller.
   31st Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, December 2017.
- numpy, scipy, and matplotlib, if you are only using the mechanisms introduced in [1].
- If you are also using the mechanisms introduced in [2] or [3], you additionally need to install CVXPY and DCCP.
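A quick way to confirm which of the dependencies above are present is a short check script. This is only a convenience sketch, not part of the repository; the package names come from the dependency list above.

```python
import importlib.util

# Core deps are needed for [1]; cvxpy and dccp only for [2] and [3].
core = ["numpy", "scipy", "matplotlib"]
optional = ["cvxpy", "dccp"]

def have(mod):
    # find_spec returns None when the module is not importable.
    return importlib.util.find_spec(mod) is not None

missing_core = [m for m in core if not have(m)]
missing_opt = [m for m in optional if not have(m)]
print("missing core deps:", missing_core or "none")
print("missing optional deps ([2], [3] only):", missing_opt or "none")
```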
- If you want to learn more about impact parity as described in [1], please navigate to the directory "disparate_impact".
- If you want to learn more about mistreatment parity as described in [2], please navigate to the directory "disparate_mistreatment".
- If you want to learn more about group-based, Pareto-optimal classifiers as described in [3], please navigate to the directory "preferential_fairness".
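To give a feel for the mechanism in [1], the sketch below illustrates its fairness proxy: bounding the covariance between the sensitive attribute z and the signed distance theta^T x to the decision boundary. This is not the repository's code; the synthetic data, the threshold c, the small L2 term (added for numerical stability), and the use of scipy's SLSQP solver are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
# Binary sensitive attribute z, correlated with the first feature by design.
z = rng.integers(0, 2, size=n).astype(float)
X = np.column_stack([z + 0.5 * rng.normal(size=n), rng.normal(size=n)])
# Noisy labels in {-1, +1} so the data is not linearly separable.
y = np.where(X[:, 0] + X[:, 1] + 0.8 * rng.normal(size=n) > 0.5, 1.0, -1.0)

def log_loss(theta):
    # Numerically stable logistic loss plus a small L2 regularizer
    # (the regularizer keeps the minimizer bounded; it is an assumption).
    return np.logaddexp(0.0, -y * (X @ theta)).mean() + 1e-3 * theta @ theta

def cov(theta):
    # Empirical covariance between z and the signed boundary distance,
    # the decision-boundary fairness proxy used in [1].
    return np.mean((z - z.mean()) * (X @ theta))

c = 0.01  # fairness threshold (assumed); smaller c means stricter parity
cons = [{"type": "ineq", "fun": lambda t: c - cov(t)},   # cov <= c
        {"type": "ineq", "fun": lambda t: c + cov(t)}]   # cov >= -c

res = minimize(log_loss, x0=np.zeros(X.shape[1]),
               method="SLSQP", constraints=cons)
print("theta:", res.x, "| |cov|:", abs(cov(res.x)))
```

The repository's own implementation of this constraint lives in "disparate_impact"; the sketch only shows why the covariance bound turns fairness into a tractable constrained optimisation.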