What is ML-fairness-gym?

ML-fairness-gym is a set of components for building simple simulations that explore the potential long-run impacts of deploying machine-learning-based decision systems in social environments.

As the importance of machine learning fairness has become increasingly apparent, recent research has focused on the potentially surprising long-term behavior of enforcing fairness measures that were originally defined in a static setting. Key findings have shown that, under specific assumptions in simplified dynamic simulations, long-term effects may in fact counteract the desired goals. Achieving a deeper understanding of such long-term effects is thus a critical direction for ML fairness research.

ML-fairness-gym implements a generalized framework for studying and probing long-term fairness effects in carefully constructed simulation scenarios where a learning agent interacts with an environment over time. This work fits into a larger push in the fair machine learning literature to design decision systems that induce fair outcomes in the long run, and to understand how these systems might differ from those designed to enforce fairness on a one-shot basis.

This initial version of ML-fairness-gym (v0.1.0) focuses on reproducing and generalizing environments that have previously been discussed in research papers.

ML-fairness-gym environments implement the environment API from OpenAI Gym.
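To illustrate what "implement the environment API from OpenAI Gym" means in practice, here is a minimal sketch of the Gym-style `reset`/`step` interaction loop. `ToyEnv` is a hypothetical stand-in written for this example, not an environment shipped with this repository, and real ML-fairness-gym environments carry richer state and observation spaces.

```python
# Illustrative sketch of the OpenAI Gym environment API.
# `ToyEnv` is a hypothetical example environment, not part of this repo.

class ToyEnv:
    """A minimal environment exposing the Gym-style reset/step interface."""

    def __init__(self, horizon=5):
        self.horizon = horizon  # episode length
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return {"step": self.t}

    def step(self, action):
        """Apply `action`; return (observation, reward, done, info)."""
        self.t += 1
        observation = {"step": self.t}
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= self.horizon
        return observation, reward, done, {}

# Standard agent/environment interaction loop.
env = ToyEnv()
observation = env.reset()
total_reward = 0.0
done = False
while not done:
    action = 1  # a real agent would choose an action based on `observation`
    observation, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # 5.0 over a horizon of 5 steps
```

An agent studying long-term fairness effects would run this loop for many steps while a metric object records outcomes over time.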

This is not an officially supported Google product.

Contact us

The ML-fairness-gym project discussion group is ml-fairness-gym-discuss@google.com.

Versions

v0.1.0: Initial release.
