smclarify

Amazon SageMaker Clarify

Bias detection and mitigation for datasets and models.

Installation

To install the package from PyPI, run:

pip install smclarify

Example notebooks showing how to run the bias metrics are available in the examples folder.
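For orientation, here is a minimal sketch of generating a pre-training bias report on a pandas DataFrame. The column names and data are hypothetical, and the exact class and parameter names in `smclarify.bias.report` may differ slightly from this sketch; treat the notebooks in the examples folder as the authoritative reference.

```python
import pandas as pd
from smclarify.bias import report

# Hypothetical dataset: "Gender" is the facet, "Approved" is the label.
df = pd.DataFrame({
    "Gender": ["male", "female", "female", "male", "male", "male"],
    "AgeGroup": ["young", "young", "old", "old", "young", "old"],
    "Approved": [1, 1, 0, 1, 0, 1],
})

facet_column = report.FacetColumn("Gender")
label_column = report.LabelColumn("Approved", df["Approved"], [1])  # positive outcome is 1

# Pre-training report: computed on the dataset alone, no model predictions needed.
# Some metrics condition on a group variable, hence the AgeGroup column here.
result = report.bias_report(
    df,
    facet_column,
    label_column,
    stage_type=report.StageType.PRE_TRAINING,
    group_variable=df["AgeGroup"],
)
print(result)
```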

Terminology

Facet

A facet is a column or feature against which bias is measured. A facet can have one or more values that designate a sample as "sensitive".

Label

The label is a column or feature that is the target for training a machine learning model. The label can have one or more values that designate a sample as having a "positive" outcome.

Bias measure

A bias measure is a function that returns a bias metric.

Bias metric

A bias metric is a numerical value indicating the level of bias detected by a particular bias measure.

Bias report

A collection of bias metrics for a given dataset or a combination of a dataset and model.
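To make these terms concrete, here is a small, hypothetical example worked out in plain pandas (without calling smclarify). The facet is "Gender" with sensitive value "female", the label is "Approved" with positive value 1, and the bias measure is Class Imbalance (CI), one of the pre-training measures reported by SageMaker Clarify.

```python
import pandas as pd

# Hypothetical dataset: "Gender" is the facet, "Approved" is the label.
df = pd.DataFrame({
    "Gender": ["male", "female", "female", "male", "male", "male"],
    "Approved": [1, 1, 0, 1, 0, 1],
})

# Facet: samples with Gender == "female" form the sensitive (disadvantaged) group.
n_d = (df["Gender"] == "female").sum()  # disadvantaged group size: 2
n_a = (df["Gender"] != "female").sum()  # advantaged group size: 4

# Bias measure: Class Imbalance, CI = (n_a - n_d) / (n_a + n_d).
# The value it returns is a bias metric; a bias report collects many such metrics.
ci = (n_a - n_d) / (n_a + n_d)
print(f"Class Imbalance (CI) = {ci:.2f}")  # 0.33
```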

Development

It's recommended that you set up a virtualenv. The commands below assume the fish shell; adjust them for bash or zsh.

virtualenv -p (which python3) venv   # create a virtualenv (fish command substitution)
source venv/bin/activate.fish        # bash/zsh: source venv/bin/activate instead
pip install -e .[test]               # editable install with test dependencies
cd src/
../devtool all                       # run the repository's devtool helper script

To run the unit tests, run pytest --pspec. If you are using PyCharm and cannot see the green run button next to the tests, open Preferences -> Tools -> Python Integrated Tools and set the default test runner to pytest.
