A Python package to assess and improve fairness of machine learning models.
🐢 Open-Source Evaluation & Testing for LLMs and ML models
Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency
This repository is for our ISSTA 2024 paper: A Large-Scale Empirical Study on Improving the Fairness of Image Classification Models
Source code for KDD 2020 paper "Algorithmic Decision Making with Conditional Fairness".
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
A fairness library in PyTorch.
👋 Influenciae is a TensorFlow toolbox for influence functions
Fairness Aware Machine Learning. Bias detection and mitigation for datasets and models.
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines) for uncertainty consistency (calibration), fairness, and other safety-relevant aspects.
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes the bias measurement and mitigation in Word Embeddings models. Please feel welcome to open an issue in case you have any questions or a pull request if you want to contribute to the project!
[ICMLA 2021] Jurity: Fairness & Evaluation Library
Massively Annotated DeepFake Databases - IEEE Transactions on Technology and Society 2024
XAIoGraphs (eXplainability Artificial Intelligence over Graphs) is an explainability and fairness Python library for classification problems with tabulated and discretized data.
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
Official PyTorch implementation of "Learning fair representation with a parametric integral probability metric", published in ICML 2022.
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
Unofficial implementation of paper "Flexibly Fair Representation Learning by Disentanglement"
Unofficial implementation of paper "Re-weighting Based Group Fairness Regularization via Classwise Robust Optimization"
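Several of the toolkits above report group fairness metrics such as demographic parity. As a minimal, library-free sketch of what such a metric computes (function name and toy data are illustrative, not taken from any of the listed libraries):

```python
def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rate between two groups.

    y_pred: iterable of 0/1 model predictions.
    groups: iterable of group labels, one per prediction (exactly two groups).
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, gi in zip(y_pred, groups) if gi == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Toy example: group "a" receives positive predictions 75% of the time,
# group "b" only 25% of the time, giving a parity gap of 0.5.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A value of 0 means both groups receive positive predictions at the same rate; the libraries above offer this and many related metrics (equalized odds, calibration) with richer APIs.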