Ethically

Join the chat at https://gitter.im/EthicallyAI/ethically

Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems πŸ”ŽπŸ€–πŸ”§

Ethically is developed with practitioners and researchers in mind, but also with learners. Therefore, it is compatible with the Python data science and machine learning tools of the trade, such as NumPy, pandas, and especially scikit-learn.

The primary goal is to be a one-stop shop for auditing the bias and fairness of machine learning systems; the secondary goal is to mitigate bias and adjust fairness through algorithmic interventions. There is a particular focus on NLP models.

Ethically consists of three sub-packages:

  1. ethically.dataset
    Collection of common benchmark datasets from fairness research.
  2. ethically.fairness
    Demographic fairness in binary classification, including metrics and algorithmic interventions.
  3. ethically.we
    Metrics and debiasing methods for bias (such as gender and race) in word embeddings (see the sketch below).
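
A quick taste of the word-embedding part, based on the usage shown in the project documentation (class and method names such as GenderBiasWE, calc_direct_bias and debias are taken from there and may vary between versions):

import gensim.downloader as gensim_downloader

from ethically.we import GenderBiasWE

# Load a pre-trained word2vec model (a large download on first use)
w2v_model = gensim_downloader.load('word2vec-google-news-300')

# Wrap the model to audit gender bias along the she-he direction
gender_bias_we = GenderBiasWE(w2v_model)
print(gender_bias_we.calc_direct_bias())  # direct bias score before debiasing

# Apply hard debiasing (Bolukbasi et al., 2016) and measure again
gender_bias_we.debias()
print(gender_bias_we.calc_direct_bias())  # should be close to zero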

For fairness, Ethically's functionality is aligned with the book Fairness and Machine Learning: Limitations and Opportunities by Solon Barocas, Moritz Hardt, and Arvind Narayanan.
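
To make the notion of demographic fairness concrete, here is a hand-rolled demographic-parity check written with plain NumPy; it only illustrates the kind of group metric ethically.fairness covers and does not assume Ethically's own function names:

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive prediction rates between two groups (0/1)."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy example: binary predictions and a binary sensitive attribute
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5 -> far from parity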

If you would like to request a feature or report a bug, please open a new issue or write to us on Gitter.

Requirements

  • Python 3.5+

Installation

Install ethically with pip:

$ pip install ethically

or directly from the source code:

$ git clone https://github.com/EthicallyAI/ethically.git
$ cd ethically
$ python setup.py install

Citation

If you have used Ethically in a scientific publication, we would appreciate a citation to the following:

@Misc{,
  author = {Shlomi Hod},
  title =  {{Ethically}: Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems},
  year =   {2018--},
  url =    {http://docs.ethically.ai/},
  note =   {[Online; accessed <today>]}
}