fairlib is a Python framework for assessing and improving fairness. Built-in algorithms can be applied to text inputs, structured inputs, and image inputs.
The fairlib package includes metrics for fairness evaluation, algorithms for bias mitigation, and functions for analysis.
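As an illustration of the kind of group fairness metric such a package evaluates, the sketch below computes the demographic parity gap, the difference in positive-prediction rates between two protected groups. This is a minimal plain-Python example for intuition only; it is not fairlib's actual API (see the API reference for the real metric functions).

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: list of 0/1 protected-attribute labels, aligned with predictions
    """
    rates = []
    for g in (0, 1):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(members) / len(members))
    return abs(rates[0] - rates[1])

# Group 0 receives a positive prediction 75% of the time, group 1 only 25%.
gap = demographic_parity_gap([1, 1, 1, 0, 1, 0, 0, 0],
                             [0, 0, 0, 0, 1, 1, 1, 1])
print(gap)  # 0.5
```

A perfectly fair classifier under this criterion would score 0; bias mitigation algorithms aim to shrink this gap while preserving task performance.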
To get started with fairlib right away, try our Colab Tutorial, which provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper introduction, and the complete API reference is also available.
fairlib currently requires Python 3.7+ and PyTorch 1.10 or higher.
Dependencies of the core modules are listed in requirements.txt.
We strongly recommend using a venv or conda environment for installation.
Standard Installation
If you do not need to modify the source, you can install it with:
# Start a new virtual environment:
conda create -n fairlib python=3.7
conda activate fairlib
pip install fairlib
Development Installation
To set up a development environment, run the following commands to clone the repository and install fairlib:
git clone https://github.com/HanXudong/fairlib.git ~/fairlib
cd ~/fairlib
python setup.py develop
Benchmark Datasets
Please refer to data/README.md for a list of fairness benchmark datasets.
A full description of fairlib usage can be found in the fairlib cheat sheet and API reference. Here are the most basic examples.
- fairlib can be run from the command line:
python fairlib --exp_id EXP_NAME
- fairlib can be imported as a package:
from fairlib.base_options import options
from src import networks

config_file = 'opt.yaml'
# Get options
state = options.get_state(conf_file=config_file)
# Init the model
model = networks.get_main_model(state)
# Training with debiasing
model.train_self()
Besides classical loss- and performance-based model selection, we provide performance-fairness trade-off based model selection (see the paper below).
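The idea behind trade-off based selection can be sketched as follows: among candidate checkpoints, pick the one closest to the ideal point (perfect performance, perfect fairness) in performance-fairness space. This is a hypothetical illustration assuming both scores lie in [0, 1]; the `history` data and function names here are made up, and fairlib's actual selection utilities are described in the paper and tutorial.

```python
import math

def distance_to_optimum(performance, fairness):
    # Euclidean distance from the ideal point (1.0 performance, 1.0 fairness);
    # smaller is better.
    return math.hypot(1.0 - performance, 1.0 - fairness)

# Hypothetical training history: (epoch, accuracy, fairness score)
history = [(1, 0.90, 0.60),
           (2, 0.85, 0.80),
           (3, 0.70, 0.95)]

best = min(history, key=lambda e: distance_to_optimum(e[1], e[2]))
print(best)  # epoch 2: neither the most accurate nor the fairest, but the best balance
```

Note that a purely performance-based criterion would pick epoch 1 here, while a purely fairness-based one would pick epoch 3; the trade-off criterion prefers the checkpoint that does reasonably well on both.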
Please see this tutorial for an example of loading training history, performing model selection with different strategies, and creating basic plots. Interactive plots are also supported for analysis.
Known issues: none at this time.
If you have any problems with our code, or have suggestions such as requests for future features, feel free to contact
- Xudong Han (xudongh1@student.unimelb.edu.au)
or describe it in Issues.
fairlib: A Unified Framework for Assessing and Improving Classification Fairness
Cite Us
@article{han2022fairlib,
title={fairlib: A Unified Framework for Assessing and Improving Classification Fairness},
author={Han, Xudong and Shen, Aili and Li, Yitong and Frermann, Lea and Baldwin, Timothy and Cohn, Trevor},
journal={arXiv preprint arXiv:2205.01876},
year={2022}
}
We appreciate all contributions. If you are planning to contribute bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions, or extensions, please first open an issue and discuss the feature with us.
This project is distributed under the terms of the APACHE LICENSE, VERSION 2.0. The license applies to all files in the GitHub repository hosting this file.
- https://github.com/HanXudong/Decoupling_Adversarial_Training_for_Fair_NLP
- https://github.com/HanXudong/Diverse_Adversaries_for_Mitigating_Bias_in_Training
- https://github.com/SsnL/dataset-distillation
- https://github.com/huggingface/torchMoji
- https://github.com/mhucka/readmine
- https://github.com/yanaiela/demog-text-removal
- https://github.com/lrank/Robust_and_Privacy_preserving_Text_Representations
- https://github.com/yuji-roh/fairbatch
- https://github.com/shauli-ravfogel/nullspace_projection
- https://github.com/AiliAili/contrastive_learning_fair_representations
- https://github.com/AiliAili/Difference_Mean_Fair_Models