Discrimination in Machine Learning

This presentation introduces methods that can uncover discrimination in your data and predictive models, including the adverse impact ratio (AIR), false positive and false negative rates, marginal effects, and standardized mean difference (SMD). Once discrimination is identified in a model, new models with less discrimination can usually be found, typically through more judicious feature selection or by adjusting hyperparameters. Mitigating discrimination matters to both consumers and operators of ML: consumers deserve equitable decisions and predictions, and operators want to avoid reputational and regulatory harm.
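As a rough illustration of how some of these metrics are computed, here is a minimal Python sketch (not from the presentation) that calculates per-group selection rates, false positive/negative rates, AIR, and SMD. The function names, the binary-outcome encoding, and the choice of reference group are assumptions for the example; the 0.8 flag on AIR reflects the commonly cited "four-fifths" rule.

```python
import numpy as np
import pandas as pd

def group_rates(y_true, y_pred, group):
    """Per-group selection, false positive, and false negative rates.

    Assumes binary labels and predictions (1 = favorable outcome) and a
    categorical protected attribute.
    """
    df = pd.DataFrame({"y": y_true, "yhat": y_pred, "g": group})
    rows = {}
    for g, d in df.groupby("g"):
        rows[g] = {
            "selection_rate": d["yhat"].mean(),                # P(yhat=1)
            "fpr": d.loc[d["y"] == 0, "yhat"].mean(),          # P(yhat=1 | y=0)
            "fnr": 1 - d.loc[d["y"] == 1, "yhat"].mean(),      # P(yhat=0 | y=1)
        }
    return pd.DataFrame(rows).T

def adverse_impact_ratio(rates, protected, reference):
    # AIR = protected-group selection rate / reference-group selection rate.
    # Values below ~0.8 are often flagged under the "four-fifths" rule.
    return (rates.loc[protected, "selection_rate"]
            / rates.loc[reference, "selection_rate"])

def standardized_mean_difference(scores, group, protected, reference):
    # SMD = (mean protected score - mean reference score) / overall std dev.
    scores, group = np.asarray(scores), np.asarray(group)
    diff = (scores[group == protected].mean()
            - scores[group == reference].mean())
    return diff / scores.std()

# Example usage with toy data:
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
grp = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = group_rates(y_true, y_pred, grp)
print(rates)
print("AIR:", adverse_impact_ratio(rates, protected="b", reference="a"))
```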

These materials were also presented in a webinar on January 23, 2020.