
Men Also Do Laundry: Multi-Attribute Bias Amplification

Repository for the ICML 2023 paper "Men Also Do Laundry: Multi-Attribute Bias Amplification", which presents interpretable metrics for measuring bias amplification with respect to multiple attributes.

[Project Page]   |   [arXiv](https://arxiv.org/abs/2210.11924)

Abstract

As computer vision systems become more widely deployed, there is increasing concern from both the research community and the public that these systems are not only reproducing but amplifying harmful social biases. The phenomenon of bias amplification, which is the focus of this work, refers to models amplifying inherent training set biases at test time. Existing metrics measure bias amplification with respect to single annotated attributes (e.g., `computer`). However, several visual datasets consist of images with multiple attribute annotations. We show models can learn to exploit correlations with respect to multiple attributes (e.g., `{computer, keyboard}`), which are not accounted for by current metrics. In addition, we show current metrics can give the erroneous impression that minimal or no bias amplification has occurred as they involve aggregating over positive and negative values. Further, these metrics lack a clear desired value, making them difficult to interpret. To address these shortcomings, we propose a new metric: Multi-Attribute Bias Amplification. We validate our proposed metric through an analysis of gender bias amplification on the COCO and imSitu datasets. Finally, we benchmark bias mitigation methods using our proposed metric, suggesting possible avenues for future bias mitigation.


Setup

To install the necessary packages, use the following command:

    pip install -r requirements.txt 

Requirements include Python >=3.6, numpy, scikit-learn, and tqdm.

Metrics

All of the metric implementations are in metrics/. We provide three metrics (a simplified sketch of the multi-attribute idea follows the list):

  • mba.py: Undirected and directed multi-attribute bias amplification (ours)
  • mals.py: Undirected single-attribute bias amplification from Zhao et al.
  • dba.py: Directed single-attribute bias amplification from Wang and Russakovsky
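The exact definitions are given in the paper and implemented in metrics/; purely as an illustration of the idea (not the repository's actual API), a simplified undirected multi-attribute score over fixed-size attribute sets could look like the sketch below. Here train_attrs/pred_attrs are binary image-by-attribute matrices, train_groups/test_groups are binary group indicators (e.g., gender), and all names and the set_size restriction are hypothetical simplifications:

    # Illustrative sketch only -- not the implementation in metrics/mba.py.
    from itertools import combinations

    import numpy as np

    def joint_presence(attrs, idxs):
        """Boolean vector marking images where every attribute in the set is present."""
        return attrs[:, idxs].all(axis=1)

    def multi_attr_bias_amp(train_attrs, train_groups, pred_attrs, test_groups, set_size=2):
        """Average absolute change, over all attribute sets of size `set_size`, in how
        often the set co-occurs with the protected group between training annotations
        and test-time predictions."""
        train_groups = train_groups.astype(bool)
        test_groups = test_groups.astype(bool)
        deltas = []
        for idxs in combinations(range(train_attrs.shape[1]), set_size):
            idxs = list(idxs)
            b_train = joint_presence(train_attrs, idxs)[train_groups].mean()
            b_pred = joint_presence(pred_attrs, idxs)[test_groups].mean()
            deltas.append(abs(b_pred - b_train))
        return float(np.mean(deltas))

Taking absolute differences before averaging keeps positive and negative shifts from cancelling out, which is one of the shortcomings of prior metrics noted in the abstract.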

Misc

To train and evaluate a multi-attribute classifier, use the files in src/ as follows:

Train Classifier:

python train.py --labels_train $TRAIN_LABELS --labels_val $VAL_LABELS --nepoch $NUM_EPOCHS \
--nclasses $NUM_ATTS --outdir $OUTDIR 

Evaluate Classifier:

python evaluate.py --labels_val $VAL_LABELS --labels_test $TEST_LABELS --modelpath $MODELPATH \
--nclasses $NUM_ATTS --outfile $OUTFILE
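
For example, with hypothetical label files and output paths (the values below are placeholders, not files shipped with this repository):

python train.py --labels_train data/train_labels.pkl --labels_val data/val_labels.pkl \
--nepoch 50 --nclasses 66 --outdir checkpoints/

python evaluate.py --labels_val data/val_labels.pkl --labels_test data/test_labels.pkl \
--modelpath checkpoints/best_model.pth --nclasses 66 --outfile results/predictions.pkl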

BibTeX

@inproceedings{zhao2023men,
    title={Men Also Do Laundry: Multi-Attribute Bias Amplification},
    author={Zhao, Dora and Andrews, Jerone TA and Xiang, Alice},
    booktitle={International Conference on Machine Learning (ICML)},
    year={2023}
}

Contact

For questions, please contact Dora Zhao (dora.zhao@sony.com).
