A curated list of resources on bias mitigation research for ML/CV researchers and engineers.
## Presentations and talks
- The danger of predictive algorithms in criminal justice - Dr. Hany Farid talks about how his team reverse engineered recommendation engines built to mete out justice in today's criminal justice system, exposing their inherent dangers and potential biases.
- Stanford HAI 2019 Fall Conference - Race, Rights and Facial Recognition
## Papers and articles
- The long road to fairer algorithms - A short article on the benefits of using causal models to identify and mitigate discrimination.
- Gender Classification and Bias Mitigation in Facial Images - A paper describing the failure of facial datasets with regard to non-binary gender; it introduces a new dataset focused on the complexities of gender assignment and uses selection rate as a measure of bias (a minimal sketch of selection rate appears after this list).
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks - The authors propose a method to model the confidence of ReLU networks with respect to the input space.
- Statistical Equity: A Fairness Classification Objective - Proposes a new fairness definition, motivated by the principle of equity, that considers existing biases in the data and attempts to make equitable decisions that account for those historical biases.
- Face Recognition: Too Bias, or Not Too Bias? - The conventional approach of learning a global threshold for all pairs results in performance gaps between subgroups. By learning subgroup-specific thresholds, we reduce performance gaps and also show a notable boost in overall performance (a per-subgroup threshold sketch follows this list).
- DeBayes: a Bayesian Method for Debiasing Network Embeddings - A conceptually elegant Bayesian method capable of learning debiased embeddings by using a biased prior. The authors' experiments show that these representations can then be used to perform link prediction that is significantly fairer in terms of popular metrics such as demographic parity and equalized opportunity.
- Two Simple Ways to Learn Individual Fairness Metrics from Data - Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness. In this paper, we present two simple ways to learn fair metrics from a variety of data types. We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases (one way to instantiate such a metric is sketched after this list).
- Data preprocessing to mitigate bias: A maximum entropy based approach - This paper presents an algorithmic framework that can be used as a data preprocessing method for mitigating bias. Unlike prior work, it can efficiently learn distributions over large domains, controllably adjust the representation rates of protected groups, and achieve target fairness metrics such as statistical parity, yet remains close to the empirical distribution induced by the given dataset (a simple reweighting stand-in is sketched after this list).
- Fair Generative Modeling via Weak Supervision - We present a weakly supervised algorithm for overcoming dataset bias for deep generative models.
- SDE-Net: Equipping Deep Neural Networks with Uncertainty Estimates - We propose a new method for quantifying the uncertainty of DNNs from a dynamical-system perspective. The core of the method is to view DNN transformations as the state evolution of a stochastic dynamical system and introduce a Brownian motion term to capture epistemic uncertainty (an Euler-Maruyama sketch follows this list).
- Conditional Learning of Fair Representations - We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting.
- Exploring Racial Bias within Face Recognition via per-subject Adversarially-Enabled Data Augmentation - In this study, we propose a novel adversarially derived data-augmentation methodology that aims to enable dataset balance at a per-subject level via image-to-image transformation for the transfer of sensitive racial characteristics in facial features.
- Revisiting the Evaluation of Uncertainty Estimation and Its Application to Explore Model Complexity-Uncertainty Trade-Off - In this paper, we focus on the two main use cases of uncertainty estimation, i.e., selective prediction and confidence calibration (both are sketched after this list).
- Attribute Aware Filter-Drop for Bias Invariant Classification - This research proposes a novel Filter-Drop algorithm for learning unbiased representations.
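The selection-rate measure referenced in the gender classification entry above is simple to compute. Below is a minimal Python sketch (the function names and the four-fifths-rule note are illustrative, not from the paper): selection rate is the fraction of positive decisions a group receives, and ratios of these rates across groups are a common disparate-impact check.

```python
import numpy as np

def selection_rates(y_pred, groups):
    """Fraction of positive predictions (selections) within each group."""
    return {g: y_pred[groups == g].mean() for g in np.unique(groups)}

def disparate_impact_ratio(y_pred, groups, privileged):
    """Ratio of each group's selection rate to the privileged group's rate.

    A ratio below 0.8 is the common 'four-fifths rule' red flag.
    """
    rates = selection_rates(y_pred, groups)
    return {g: r / rates[privileged] for g, r in rates.items()}

# Toy usage with made-up predictions and group labels
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(selection_rates(y_pred, groups))            # {'a': 0.75, 'b': 0.25}
print(disparate_impact_ratio(y_pred, groups, "a"))  # {'a': 1.0, 'b': 0.333...}
```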
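For the subgroup-threshold idea in Face Recognition: Too Bias, or Not Too Bias?, a hedged sketch follows. It assumes similarity scores, binary genuine/impostor labels, and a subgroup tag per pair; picking each subgroup's threshold at a fixed false-accept rate is one plausible instantiation, not necessarily the paper's exact procedure.

```python
import numpy as np

def per_subgroup_thresholds(scores, labels, subgroups, target_far=1e-3):
    """Pick, for each subgroup, the similarity threshold whose false-accept
    rate on that subgroup's impostor pairs (label == 0) matches target_far."""
    thresholds = {}
    for g in np.unique(subgroups):
        impostor = scores[(subgroups == g) & (labels == 0)]
        # The (1 - target_far) quantile of impostor scores accepts ~target_far of them
        thresholds[g] = np.quantile(impostor, 1.0 - target_far)
    return thresholds

def verify(score, subgroup, thresholds):
    """Accept a pair only if its score clears its own subgroup's threshold."""
    return score >= thresholds[subgroup]
```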
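One simple way to build an individual-fairness metric of the kind learned in Two Simple Ways to Learn Individual Fairness Metrics from Data is to make distances insensitive to directions that encode a protected attribute. The projection below is a generic illustration under that assumption; in practice the sensitive directions would come from, e.g., a linear classifier for the attribute.

```python
import numpy as np

def fair_distance(x1, x2, sensitive_directions):
    """Euclidean distance after projecting out the subspace spanned by
    directions that encode the sensitive attribute."""
    A = np.atleast_2d(sensitive_directions).T        # columns span sensitive subspace
    P = np.eye(A.shape[0]) - A @ np.linalg.pinv(A)   # orthogonal projector
    d = P @ (np.asarray(x1, float) - np.asarray(x2, float))
    return float(np.sqrt(d @ d))

# Two points differing only along the sensitive direction are treated as identical
v = np.array([1.0, 0.0, 0.0])                        # hypothetical sensitive direction
print(fair_distance([1, 2, 3], [5, 2, 3], v))        # 0.0
```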
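The maximum-entropy preprocessing paper adjusts the representation rates of protected groups; the full algorithm is involved, so the sketch below shows only the much simpler reweighting idea it generalizes. The target rates and group labels are illustrative assumptions, not the paper's method.

```python
import numpy as np

def representation_weights(groups, target_rates):
    """Per-example weights that move each protected group from its empirical
    representation rate to a chosen target rate. A crude stand-in for the
    paper's max-entropy preprocessing, not its actual algorithm."""
    groups = np.asarray(groups)
    weights = np.empty(len(groups), dtype=float)
    for g, target in target_rates.items():
        mask = groups == g
        weights[mask] = target / mask.mean()  # target rate / empirical rate
    return weights

groups = ["a"] * 8 + ["b"] * 2
w = representation_weights(groups, {"a": 0.5, "b": 0.5})
# Group "a" examples get weight 0.625, group "b" 2.5, equalizing total mass.
```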
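SDE-Net's view of a network as a stochastic dynamical system can be made concrete with a few lines of Euler-Maruyama integration. The drift and diffusion callables below stand in for the paper's two learned subnetworks; everything else (step count, toy inputs) is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sde_net_forward(x, drift, diffusion, n_steps=10, t_end=1.0):
    """Euler-Maruyama integration of dx = f(x, t) dt + g(x) dW, the view of
    a DNN forward pass as the state evolution of a stochastic system."""
    dt = t_end / n_steps
    t = 0.0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increment
        x = x + drift(x, t) * dt + diffusion(x) * dw
        t += dt
    return x

# Repeating the stochastic forward pass yields an epistemic spread per input.
drift = lambda x, t: -x                        # placeholder for the drift net
diffusion = lambda x: 0.1 * np.ones_like(x)    # placeholder for the diffusion net
samples = np.stack([sde_net_forward(np.ones(4), drift, diffusion) for _ in range(32)])
uncertainty = samples.std(axis=0)
```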
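The two use cases highlighted in the uncertainty-evaluation paper, confidence calibration and selective prediction, each have a standard one-function metric. Below are generic sketches of expected calibration error and risk at fixed coverage (bin count and coverage level are arbitrary choices; `correct` is assumed to be a boolean array of per-example correctness).

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the per-bin gap
    between mean confidence and empirical accuracy, weighted by bin size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

def risk_at_coverage(confidences, correct, coverage=0.8):
    """Selective prediction: answer only the most confident `coverage`
    fraction of inputs and report the error rate (risk) on those."""
    k = int(np.ceil(coverage * len(confidences)))
    kept = np.argsort(-confidences)[:k]
    return 1.0 - correct[kept].mean()
```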
## Workshops and competitions
- Fair Face Recognition and Analysis - ChaLearn Looking at People workshop, ECCV 2020
- Fair Face Recognition challenge - ECCV 2020
- Fair, Data Efficient and Trusted Computer Vision - IEEE CVPR 2020 workshop
## Repositories and Code
## Acknowledgments
This project is generously supported by Trueface.ai.