A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
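This description matches IBM's AIF360 toolkit; assuming that library, a minimal sketch of measuring dataset bias and applying a pre-processing mitigator might look like the following (the protected attribute, column names, and group definitions are illustrative):

```python
# Minimal sketch assuming the library is IBM's AIF360; columns and groups are illustrative.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: "sex" is the protected attribute, "label" is the binary outcome.
df = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 0, 1, 1, 0],
    "score": [0.2, 0.7, 0.4, 0.9, 0.1, 0.8, 0.3, 0.6],
    "label": [0, 1, 0, 1, 0, 1, 1, 1],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Dataset-level fairness metrics before mitigation.
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())

# Pre-processing mitigation: reweigh instances to balance outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
```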
A Python package to assess and improve fairness of machine learning models.
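This tagline matches Fairlearn; assuming that package, a short sketch of a per-group metric report followed by a reductions-based mitigator (the toy data and estimator choice are illustrative):

```python
# Minimal sketch assuming the package is Fairlearn; the toy data and estimator are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

X = np.array([[0, 1], [1, 1], [0, 0], [1, 0], [0, 1], [1, 0]] * 10)
y = np.array([0, 1, 0, 1, 1, 0] * 10)
sensitive = np.array(["a", "a", "b", "b", "a", "b"] * 10)

clf = LogisticRegression().fit(X, y)

# Assess: accuracy and selection rate broken down by the sensitive feature.
frame = MetricFrame(metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
                    y_true=y, y_pred=clf.predict(X), sensitive_features=sensitive)
print(frame.by_group)

# Improve: retrain under an (approximate) demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
print(mitigator.predict(X)[:10])
```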
🐢 Open-Source Evaluation & Testing for LLMs and ML models
Fairness-aware machine learning: bias detection and mitigation for datasets and models.
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
[ACL 2020] Towards Debiasing Sentence Representations
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes bias measurement and mitigation in word embedding models. Please feel free to open an issue if you have any questions, or a pull request if you want to contribute to the project!
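Assuming WEFE's documented Query/WEAT interface and gensim's pretrained-vector downloader, a rough sketch of a query-based bias measurement might look like this (the word lists are illustrative):

```python
# Rough sketch assuming WEFE's Query/WEAT API and gensim's downloader; word lists are illustrative.
import gensim.downloader as api
from wefe.word_embedding_model import WordEmbeddingModel
from wefe.query import Query
from wefe.metrics import WEAT

# Wrap a pretrained embedding so WEFE can query it.
model = WordEmbeddingModel(api.load("glove-wiki-gigaword-50"), "glove-50")

# A query pairs target word sets with attribute word sets.
query = Query(
    target_sets=[["she", "woman", "her"], ["he", "man", "his"]],
    attribute_sets=[["science", "physics", "chemistry"], ["art", "poetry", "dance"]],
    target_sets_names=["Female terms", "Male terms"],
    attribute_sets_names=["Science", "Arts"],
)

# WEAT reports the association between targets and attributes (test statistic / effect size).
result = WEAT().run_query(query, model)
print(result)
```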
SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments
[ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models
A tool for gender bias identification in text. Part of Microsoft's Responsible AI toolbox.
PyTorch package to train and audit ML models for Individual Fairness
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
Counterfactual Local Explanations of AI systems
Source code for KDD 2020 paper "Algorithmic Decision Making with Conditional Fairness".
FairBatch: Batch Selection for Model Fairness (ICLR 2021)
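A hypothetical, much-simplified sketch of the batch-selection idea (not the repository's code): per-group sampling weights are adapted between epochs toward whichever group currently has the higher loss.

```python
# Hypothetical, simplified sketch of fairness-aware batch selection (not the repository's code):
# per-group sampling weights are nudged each epoch toward the group with the higher loss.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

torch.manual_seed(0)
X = torch.randn(400, 5)
group = torch.randint(0, 2, (400,))              # sensitive attribute (0 or 1)
y = (X[:, 0] + 0.5 * group.float() > 0).float()  # toy labels

model = nn.Linear(5, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss(reduction="none")

lam, alpha = 0.5, 0.05  # group-1 sampling weight and its per-epoch step size

for epoch in range(20):
    # Sample batches with group-dependent probabilities controlled by lam.
    g1 = (group == 1).float()
    weights = (g1 * lam + (1.0 - g1) * (1.0 - lam)).double()
    sampler = WeightedRandomSampler(weights, num_samples=len(y), replacement=True)
    loader = DataLoader(TensorDataset(X, y), batch_size=32, sampler=sampler)

    for xb, yb in loader:
        loss = loss_fn(model(xb).squeeze(-1), yb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Re-balance: shift sampling weight toward whichever group currently has higher loss.
    with torch.no_grad():
        per_example = loss_fn(model(X).squeeze(-1), y)
        direction = torch.sign(per_example[group == 1].mean() - per_example[group == 0].mean())
        lam = float(min(max(lam + alpha * direction.item(), 0.05), 0.95))

print("final sampling weight for group 1:", lam)
```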
[ICMLA 2021] Jurity: Fairness & Evaluation Library
PyTorch reimplementation of Shapley-value computation via Truncated Monte Carlo sampling, from "What is your data worth? Equitable Valuation of Data" by Amirata Ghorbani and James Zou [ICML 2019].
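As an illustration of the Truncated Monte Carlo idea described here (not the repository's PyTorch code), a short scikit-learn sketch: sample random permutations of the training points, credit each point with its marginal gain in validation accuracy, and stop scanning a permutation once performance is within a tolerance of the full-data score.

```python
# Illustrative sketch of Truncated Monte Carlo (TMC) Shapley data valuation;
# scikit-learn is used instead of PyTorch to keep the example short.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def tmc_shapley(X_tr, y_tr, X_val, y_val, iters=30, tol=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_tr)
    values = np.zeros(n)

    def score(idx):
        # Validation accuracy of a model trained on subset idx; with <2 classes, use chance level.
        if len(set(y_tr[idx])) < 2:
            return max(np.mean(y_val == c) for c in set(y_val))
        clf = LogisticRegression(max_iter=200).fit(X_tr[idx], y_tr[idx])
        return clf.score(X_val, y_val)

    full_score = score(np.arange(n))
    for _ in range(iters):
        perm = rng.permutation(n)
        prev = score(np.array([], dtype=int))  # empty-set baseline
        for j, i in enumerate(perm):
            # Truncation: once close to full-data performance, remaining
            # marginal contributions are treated as zero.
            if abs(full_score - prev) < tol:
                break
            cur = score(perm[: j + 1])
            values[i] += cur - prev
            prev = cur
    return values / iters

X, y = make_classification(n_samples=120, n_features=5, random_state=0)
X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]
print(tmc_shapley(X_tr, y_tr, X_val, y_val)[:10])
```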
Code to accompany the NeurIPS paper https://arxiv.org/abs/2006.08564
Package for evaluating the performance of methods that aim to increase fairness, accountability, and/or transparency.