Train Gradient Boosting models that are both high-performance *and* Fair!
Investigating and Mitigating the Bias-Accuracy Tradeoff via Protected-Category Sampling
A Python package to assess and improve fairness of machine learning models.
This hosts the code and appendix of the SIGIR 2024 full paper "Can We Trust Recommender System Fairness Evaluation: The Role of Fairness and Relevance"
Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency
Official implementation of our work "Collaborative Fairness in Federated Learning."
Source code for KDD 2020 paper "Algorithmic Decision Making with Conditional Fairness".
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
Responsible AI Workshop: a series of tutorials & walkthroughs to illustrate how to put responsible AI into practice
SSA is a post-hoc explanation method that uses stereotypes and counter-stereotypes to assess social bias in hate speech classifiers
Trustworthy AI/ML course by Professor Birhanu Eshete, University of Michigan, Dearborn.
A Python package for mitigating bias in tabular data.
Binary classification, SHAP (Explainable Artificial Intelligence), and Grid Search (for tuning hyperparameters) using EfficientNetV2-B0 on the Cat vs. Dog dataset.
A fairness library in PyTorch.
A collection of machine learning and reinforcement learning algorithms for my PhD research and/or personal education.
😎 Everything about class-imbalanced/long-tail learning: papers, code, frameworks, and libraries
Evaluating ensemble performance in long-tailed datasets (NeurIPS 2023 Heavy Tails Workshop)
TensorFlow's Fairness Evaluation and Visualization Toolkit
ML Testing for Everyone. Find issues before they become problems.
An automated, collaborative ethical bias auditing platform for ML models. Demo: https://youtu.be/8mE_vLP9TYc