fairlib

fairlib is a Python framework for assessing and improving classification fairness. Its built-in algorithms can be applied to text, structured, and image inputs.

The fairlib package includes metrics for fairness evaluation, algorithms for bias mitigation, and functions for analysis.
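To make "metrics for fairness evaluation" concrete, here is a minimal sketch of one standard group-fairness metric, the aggregated true-positive-rate (TPR) gap across demographic groups. This is a generic illustration of the idea, not fairlib's actual API:

```python
# Illustrative group-fairness metric: mean absolute gap between each
# demographic group's TPR and the overall TPR (0 = perfectly fair).

def tpr_gap(y_true, y_pred, groups):
    """y_true, y_pred: 0/1 labels; groups: group id per instance."""
    def tpr(idx):
        pos = [i for i in idx if y_true[i] == 1]
        if not pos:
            return 0.0
        return sum(y_pred[i] == 1 for i in pos) / len(pos)

    all_idx = range(len(y_true))
    overall = tpr(all_idx)
    gaps = [abs(tpr([i for i in all_idx if groups[i] == g]) - overall)
            for g in set(groups)]
    return sum(gaps) / len(gaps)
```

For example, with `y_true=[1,1,1,1]`, `y_pred=[1,0,1,1]`, and `groups=[0,0,1,1]`, the overall TPR is 0.75 while the two group TPRs are 0.5 and 1.0, giving a gap of 0.25.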

If you want to start using fairlib right away, try our Colab Tutorial, which provides a gentle introduction to its concepts and capabilities. The tutorials and other notebooks offer a deeper introduction, and the complete API reference is also available.

Installation

fairlib currently requires Python 3.7+ and PyTorch 1.10 or higher. Dependencies of the core modules are listed in requirements.txt. We strongly recommend installing into a venv or conda environment.

Standard Installation

If you do not need to modify the source code, you can install fairlib with:

# Start a new virtual environment:
conda create -n fairlib python=3.7
conda activate fairlib

pip install fairlib

Development Installation

To set up a development environment, run the following commands to clone the repository and install fairlib:

git clone https://github.com/HanXudong/fairlib.git ~/fairlib
cd ~/fairlib
python setup.py develop

Benchmark Datasets

Please refer to data/README.md for a list of fairness benchmark datasets.

Usage

The full description of fairlib usages can be found in fairlib cheat sheet and API reference. Here are the most basic examples.

  • fairlib can be run from the command line:

    python fairlib --exp_id EXP_NAME
  • fairlib can be imported as a package

    from fairlib.base_options import options
    from fairlib.src import networks
    
    config_file = 'opt.yaml'
    # Get options
    state = options.get_state(conf_file=config_file)
    
    # Init the model
    model = networks.get_main_model(state)
    
    # Training with debiasing
    model.train_self()
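As an illustration of one common debiasing strategy from this line of work, balanced training via instance reweighting can be sketched as follows. This is a generic sketch of the technique, not fairlib's implementation; the function name is illustrative:

```python
# Sketch of balanced training via instance reweighting: weight each
# instance inversely to its group's frequency so that every demographic
# group contributes equally to the training loss.

from collections import Counter

def balanced_weights(groups):
    """groups: group id per training instance.
    Returns per-instance loss weights that sum to len(groups)."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * count_g): each group's total weight is n / k
    return [n / (k * counts[g]) for g in groups]
```

For a dataset with groups `[0, 0, 0, 1]`, the minority group's single instance receives weight 2.0 while each majority instance receives 2/3, so both groups contribute a total weight of 2.0.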

Model Selection and Fairness Evaluation

Besides the classical loss- and performance-based model selection, we provide performance-fairness trade-off based model selection (see the paper below).

Please see this tutorial for an example of loading training history, performing model selection with different strategies, and creating basic plots. Interactive plots are also supported for analysis.
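The trade-off-based selection described in the paper can be sketched as picking the checkpoint closest to the ideal point of perfect performance and perfect fairness (distance to optimum, DTO). This is a minimal sketch of the idea, assuming both scores are normalized to [0, 1]; it is not fairlib's exact interface:

```python
# Sketch of performance-fairness trade-off model selection: choose the
# candidate with the smallest Euclidean distance to the optimum (1, 1).

import math

def select_by_dto(candidates):
    """candidates: list of (name, performance, fairness), each score in [0, 1].
    Returns the name of the candidate with minimal distance to (1, 1)."""
    def dto(perf, fair):
        return math.hypot(1.0 - perf, 1.0 - fair)
    return min(candidates, key=lambda c: dto(c[1], c[2]))[0]
```

For example, among checkpoints `("a", 0.90, 0.60)`, `("b", 0.85, 0.80)`, and `("c", 0.70, 0.95)`, checkpoint `b` is selected: it is neither the most accurate nor the fairest, but it lies closest to the optimum.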

Known issues and limitations

None are known at this time.

Getting help

If you run into any problems with our code or have suggestions, including requests for future features, feel free to contact us or describe them in Issues.

Paper

fairlib: A Unified Framework for Assessing and Improving Classification Fairness

Cite Us

@article{han2022fairlib,
  title={fairlib: A Unified Framework for Assessing and Improving Classification Fairness},
  author={Han, Xudong and Shen, Aili and Li, Yitong and Frermann, Lea and Baldwin, Timothy and Cohn, Trevor},
  journal={arXiv preprint arXiv:2205.01876},
  year={2022}
}

Contributing

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.

License

This project is distributed under the terms of the APACHE LICENSE, VERSION 2.0. The license applies to all files in the GitHub repository hosting this file.

Acknowledgments
