Codebase for "Fair-GAIN" for fair ML predictions.
Updated Mar 24, 2023 - Python
Hmumu classifiers trained to be fair w.r.t. the invariant mass.
Demos for my blog, Ponder @ ponder.substack.com
This workflow is generated by Geoweaver to replicate the experiments from EmissionAI's published article. Currently, the workflow retrieves data from a precompiled source; the next version will collect data directly from the individual data sources.
Unofficial implementation of paper "Flexibly Fair Representation Learning by Disentanglement"
Fair Classification with Gaussian Process (FCGP)
Package implementing methods developed in "Preventing Fairness Gerrymandering" [ICML '18] and "Rich Subgroup Fairness for Machine Learning" [FAT* '19]. Active development fork at @algowatchupenn.
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines) for uncertainty consistency (calibration), fairness, and other safety-relevant aspects.
Code for fair-representation learning path
XAIoGraphs (eXplainability Artificial Intelligence over Graphs) is an Explainability and Fairness Python library for classification problems with tabular and discretized data.
Planning to Fairly Allocate: Probabilistic Fairness in the Restless Bandit Setting (KDD 2023)
CIRCLe: Color Invariant Representation Learning for Unbiased Classification of Skin Lesions. Mirror of https://github.com/arezou-pakzad/CIRCLe
Official PyTorch implementation of "Learning fair representation with a parametric integral probability metric," published in ICML 2022.
The Vivaldy (VerIfication and Validation of Ai-enabLeD sYstems) analysis tool and dashboard allows for automated subgroup performance analysis for AI algorithms.
Measures the stability of fairness metrics for a fair AI system.
Unofficial implementation of the paper "Re-weighting Based Group Fairness Regularization via Classwise Robust Optimization"
This repository is for our ISSTA 2024 paper: A Large-Scale Empirical Study on Improving the Fairness of Image Classification Models
Code implementing differential fairness (DF) metric in: J. R. Foulds, R. Islam, K. Keya, and S. Pan. An Intersectional Definition of Fairness. 36th IEEE International Conference on Data Engineering (ICDE), 2020
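As context for the repository above: in the cited paper, ε-differential fairness bounds the ratio of outcome probabilities between every pair of protected (intersectional) groups by e^ε. The sketch below is a minimal, unofficial illustration of estimating an empirical ε from binary predictions, using Dirichlet smoothing of per-group positive rates (the function name and smoothing parameter are my own assumptions, not the repository's API):

```python
import numpy as np

def empirical_df_epsilon(y_pred, groups, concentration=1.0):
    """Estimate an empirical epsilon in the spirit of differential fairness
    (Foulds et al., ICDE 2020): the largest absolute log-ratio of smoothed
    per-group outcome probabilities over all pairs of groups.

    y_pred: iterable of 0/1 predictions
    groups: iterable of group labels (e.g., intersectional group ids)
    concentration: Dirichlet smoothing strength (hypothetical default)
    """
    y_pred = np.asarray(y_pred, dtype=float)
    groups = np.asarray(groups)
    rates = []
    for g in np.unique(groups):
        mask = groups == g
        # Smoothing keeps each probability strictly inside (0, 1)
        p = (y_pred[mask].sum() + concentration / 2) / (mask.sum() + concentration)
        rates.append(p)
    eps = 0.0
    for i in range(len(rates)):
        for j in range(len(rates)):
            if i == j:
                continue
            # Bound the ratios for both outcomes, y=1 and y=0
            for p_i, p_j in ((rates[i], rates[j]), (1 - rates[i], 1 - rates[j])):
                eps = max(eps, abs(np.log(p_i) - np.log(p_j)))
    return eps
```

With identical positive rates across groups the estimate is 0; it grows as the groups' outcome probabilities diverge.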
Code to accompany NeurIPS paper https://arxiv.org/abs/2006.08564