
Debiasing Vision-Language Models via Biased Prompts

Machine learning models have been shown to inherit biases from their training datasets, which can be particularly problematic for vision-language foundation models trained on uncurated datasets scraped from the internet. These biases can be amplified and propagated to downstream applications such as zero-shot classifiers and text-to-image generative models. In this study, we propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding. In particular, we show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models. The closed-form solution enables easy integration into large-scale pipelines, and empirical results demonstrate that our approach effectively reduces social bias and spurious correlation in both discriminative and generative vision-language models, without the need for additional data or training.

Debiasing Vision-Language Models via Biased Prompts, Preprint 2023 [paper]
Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, and Stefanie Jegelka
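The core idea, projecting out biased directions in the text embedding, can be sketched with a plain orthogonal projection. This is a minimal illustration, not the paper's full method (the paper uses a calibrated projection matrix), and the bias directions and embeddings below are random placeholder vectors rather than real CLIP outputs:

```python
import numpy as np

def debias_projection(bias_directions):
    """Build the orthogonal projection matrix that removes the span of
    the given bias directions (rows of `bias_directions`, shape [k, d]).
    Closed form: P = I - A (A^T A)^{-1} A^T, with A's columns the directions."""
    A = np.asarray(bias_directions, dtype=float).T  # shape [d, k]
    return np.eye(A.shape[0]) - A @ np.linalg.pinv(A.T @ A) @ A.T

# Toy stand-in for a biased direction, e.g. the difference between the
# embeddings of a pair of biased prompts ("a photo of a man" vs.
# "a photo of a woman"). Here it is just a random vector.
rng = np.random.default_rng(0)
d = 8
bias_dir = rng.normal(size=(1, d))
P = debias_projection(bias_dir)

# Applying P to a (placeholder) text embedding removes the biased component.
text_emb = rng.normal(size=d)
debiased = P @ text_emb
print(abs(float(bias_dir[0] @ debiased)) < 1e-8)
```

In the actual pipeline the projected text embedding would then be used in place of the original one, e.g. as the zero-shot classifier weight or the conditioning vector of the generative model.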

Prerequisites

  • Python 3.6
  • PyTorch 1.10.1
  • PIL
  • diffusers
  • scikit-learn
  • clip
  • transformers
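One way to install these dependencies with pip (assuming the PyPI package names `Pillow` for PIL and OpenAI's CLIP GitHub repository for clip; adjust versions to match your CUDA setup):

```shell
# Install the listed dependencies (versions per the prerequisites above).
pip install torch==1.10.1 Pillow diffusers scikit-learn transformers
# OpenAI's CLIP is installed from its GitHub repository.
pip install git+https://github.com/openai/CLIP.git
```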

Code

Check the discriminative and generative folders.

Citation

If you find this repo useful for your research, please consider citing the paper:

@article{chuang2023debiasing,
  title={Debiasing Vision-Language Models via Biased Prompts},
  author={Chuang, Ching-Yao and Jampani, Varun and Li, Yuanzhen and Torralba, Antonio and Jegelka, Stefanie},
  journal={arXiv preprint arXiv:2302.00070},
  year={2023}
}

For any questions, please contact Ching-Yao Chuang (cychuang@mit.edu).

Acknowledgements

The code for the discriminative model is primarily inspired by the supplement of Zhang and Ré.

The code for the generative model is primarily inspired by the Hugging Face example.
