A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
XAI - An eXplainability toolbox for machine learning
Bias Auditing & Fair ML Toolkit
Automatic synthesis of RCTs
Can we use explanations to improve hate speech models? Our AAAI 2021 paper explores this question.
Bluetooth Impersonation AttackS (BIAS) [CVE-2020-10135]
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
NeurIPS 2019 Paper: RUBi: Reducing Unimodal Biases for Visual Question Answering
A reading list and fortnightly discussion group designed to provoke discussion about ethical applications of, and processes for, data science.
Identify bias and measure fairness of your data
Compass-aligned Distributional Embeddings: align embeddings from different corpora
[CCKS 2021] On Robustness and Bias Analysis of BERT-based Relation Extraction
A Python toolkit for analyzing machine learning models and datasets.
Symmetric evaluation set based on the FEVER (fact verification) dataset
A Multidimensional Dataset for Analyzing and Detecting News Bias based on Crowdsourcing
Cross-lingual version of WEAT
Bias correction method using quantile mapping
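The quantile-mapping entry above refers to a standard statistical technique: values from a biased distribution are mapped onto the quantiles of a reference distribution. As an illustration of the general idea only (this is a minimal NumPy sketch, not that repository's API; the function name `quantile_map` is hypothetical):

```python
import numpy as np

def quantile_map(model, obs, target):
    """Correct `target` values from a biased model distribution by
    mapping each value's empirical quantile under `model` onto the
    same quantile of the observed reference distribution `obs`."""
    # Empirical quantile of each target value under the model distribution
    q = np.searchsorted(np.sort(model), target, side="right") / len(model)
    q = np.clip(q, 0.0, 1.0)
    # Look up the same quantile in the observed distribution
    return np.quantile(obs, q)

# Example: the model runs ~10 units high relative to observations,
# so a model value of 60 is corrected to roughly 50.
obs = np.arange(100, dtype=float)
model = obs + 10.0
corrected = quantile_map(model, obs, np.array([60.0]))
```

Real implementations add detail (parametric fits, tail extrapolation, seasonal windows), but the core mapping is the two lookups shown here.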
This is the code repository for the IJCAI-21 paper "Understanding the Effect of Bias in Deep Anomaly Detection".
Official repo for "Characterizing Stigmatizing Language in Medical Records" (ACL 2023)