Remove gender classification (#102)
* remove gender classification tutorial.
pronics2004 authored and hoffmansc committed Sep 9, 2019
1 parent 2e13da8 commit fd91d19
Showing 2 changed files with 0 additions and 1,084 deletions.
2 changes: 0 additions & 2 deletions examples/README.md
@@ -7,8 +7,6 @@ the user through the various steps of the notebook.
## Tutorials
The [Credit scoring](https://nbviewer.jupyter.org/github/IBM/AIF360/blob/master/examples/tutorial_credit_scoring.ipynb) tutorial is the recommended first tutorial for understanding how AIF360 works. It first provides a brief summary of a machine learning workflow and an overview of AIF360. It then demonstrates the use of one fairness metric (mean difference) and one bias mitigation algorithm (optimized preprocessing) in the context of age bias in a credit scoring scenario using the [German Credit dataset](https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29).

The [Gender classification of face images](https://nbviewer.jupyter.org/github/IBM/AIF360/blob/master/examples/tutorial_gender_classification.ipynb) tutorial provides a more comprehensive use case of detecting and mitigating bias in the automatic gender classification of facial images. The tutorial demonstrates the use of AIF360 to study the differential performance of a custom classifier. It uses several fairness metrics (statistical parity difference, disparate impact, equal opportunity difference, average odds difference, and Theil index) and the reweighing mitigation algorithm. It works with the [UTK dataset](https://susanqq.github.io/UTKFace/).

The [Medical expenditure](https://nbviewer.jupyter.org/github/IBM/AIF360/blob/master/examples/tutorial_medical_expenditure.ipynb) tutorial is a comprehensive example demonstrating how a data scientist can interactively explore, detect, and mitigate racial bias in a care management scenario. It uses a variety of fairness metrics (disparate impact, average odds difference, statistical parity difference, equal opportunity difference, and Theil index) and algorithms (reweighing, prejudice remover, and disparate impact remover). It also demonstrates how explanations can be generated for predictions made by models learned with the toolkit using LIME.
Data from the Medical Expenditure Panel Survey ([2015](https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-181) and [2016](https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-192)) is used in this tutorial.

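For context on what the remaining tutorials cover, below is a minimal sketch of the metric-then-mitigate workflow the README describes, assuming AIF360's German Credit dataset loader with `age` as the protected attribute (as in the credit scoring tutorial) and the reweighing algorithm (one of the mitigation algorithms named in the medical expenditure tutorial). The group definitions and dropped features are illustrative assumptions, not part of this commit.

```python
# Illustrative sketch: compute a fairness metric, then apply reweighing.
# Group definitions (age >= 25 as privileged) are assumptions for this example.
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Load German Credit with 'age' as the protected attribute.
dataset = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],   # age >= 25 treated as privileged
    features_to_drop=['personal_status', 'sex'])

privileged_groups = [{'age': 1}]
unprivileged_groups = [{'age': 0}]

# Mean difference: difference in favorable-outcome rates between the groups.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged_groups,
                                  privileged_groups=privileged_groups)
print('Mean difference before mitigation:', metric.mean_difference())

# Reweighing adjusts instance weights so outcomes are balanced across groups.
rw = Reweighing(unprivileged_groups=unprivileged_groups,
                privileged_groups=privileged_groups)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unprivileged_groups,
                                         privileged_groups=privileged_groups)
print('Mean difference after reweighing:', metric_transf.mean_difference())
```

A mean difference of zero would indicate equal favorable-outcome rates for the two age groups; reweighing pushes the weighted training data toward that balance before a classifier is trained.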
