On Fairness of Medical Image Classification with Multiple Sensitive Attributes via Learning Orthogonal Representations


vengdeng/FCRO


This repository contains the PyTorch implementation of our IPMI 2023 paper "On Fairness of Medical Image Classification with Multiple Sensitive Attributes via Learning Orthogonal Representations".

Wenlong Deng*, Yuan Zhong*, Qi Dou, Xiaoxiao Li

[Paper]

Usage

Setup

pip

See requirements.txt for the full list of dependencies.

pip install -r requirements.txt

conda

We recommend setting up the environment via conda from the exported environment.yml.

conda env create -f environment.yml
conda activate fcro

Datasets

Please download the original CheXpert dataset here, and supplementary demographic data here.

In our paper, we use an augmented version of the low-resolution CheXpert dataset. Please download the metadata of the augmented dataset here, and put it under the ./metadata/ directory.

Pretrained Models

Please download our pretrained models (trained with 5-fold cross-validation) here, and put them under the ./checkpoint/ directory.
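Assuming the default paths above, the repository layout after both downloads should look roughly like this (the file names inside each directory depend on what the downloads contain):

```text
FCRO/
├── metadata/        # augmented CheXpert metadata from the link above
├── checkpoint/      # pretrained 5-fold models from the link above
├── train.py
└── requirements.txt
```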

Run a single experiment

python train.py --image_path [image_path] --exp_path [exp_path] --metadata [metadata] \
--lr [lr] --weight_decay [weight_decay] --epoch [epoch] --batch_size [batch_size] \
-a [sensitive_attributes] --dim_rep [dim_rep] -wc [wc] -wr [wr] \
--subspace_thre [subspace_thre] -f [fold] --cond --moving_base --from_sketch

For more information on the available options, run python train.py -h.

Here is an example of how to run an experiment on fold 0 from scratch:

# Train from scratch (--from_sketch), i.e., train the sensitive head first and then the target head.
python train.py --image_path XXX -f 0 --cond --from_sketch

Here is another example of how to train the target model using a pretrained sensitive model:

python train.py --image_path XXX -f 0 --cond

By default, the pretrained sensitive model under the ./checkpoint/ directory will be used. To customize this, use the --pretrained_path option.
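For example, to load the sensitive model from a custom location instead (the checkpoint path below is a placeholder):

```shell
python train.py --image_path XXX -f 0 --cond --pretrained_path /path/to/sensitive_checkpoint
```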

To calculate the column orthogonal loss with the accumulative space construction variant, please use the --moving_space option.
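For example (same placeholder image path as above):

```shell
python train.py --image_path XXX -f 0 --cond --moving_space
```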

Test

After downloading our pretrained models and the metadata, you can reproduce the 5-fold cross-validation results reported in our paper by running:

# Run the test using the fold-0 model. Run all 5 folds to reproduce our results.
python train.py --test --image_path XXX -f 0
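Each fold produces its own test metrics; to report a single number across folds, the per-fold scores are typically aggregated as mean ± standard deviation. A minimal sketch (the AUC values below are made up, not our actual results):

```python
# Aggregate per-fold test AUCs into mean +/- std (illustrative values only).
fold_aucs = [0.81, 0.79, 0.82, 0.80, 0.78]  # one score per fold

mean_auc = sum(fold_aucs) / len(fold_aucs)
variance = sum((a - mean_auc) ** 2 for a in fold_aucs) / len(fold_aucs)
std_auc = variance ** 0.5

print(f"AUC: {mean_auc:.3f} +/- {std_auc:.3f}")
```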

You may customize the --pretrained_path and --sensitive_attributes options to use other pretrained models or to test on other combinations of sensitive attributes.
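For example, to test on a different combination of sensitive attributes (the attribute names and checkpoint path below are placeholders; check python train.py -h for the exact argument format):

```shell
python train.py --test --image_path XXX -f 0 -a sex race --pretrained_path /path/to/checkpoints
```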

Citation

If you find this work helpful, feel free to cite our paper as follows:

@article{deng2023fairness,
  title={On Fairness of Medical Image Classification with Multiple Sensitive Attributes via Learning Orthogonal Representations},
  author={Deng, Wenlong and Zhong, Yuan and Dou, Qi and Li, Xiaoxiao},
  journal={arXiv preprint arXiv:2301.01481},
  year={2023}
}
