Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias

This repo contains code for the paper Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias by Robert Wolfe, Yiwei Yang, Bill Howe, and Aylin Caliskan.

The SOBEM dataset can be obtained here.

The code for running Embedding Association Tests (EATs) lives in three folders: Embedding Collection, Emotion Associations, and Profession Associations. Embedding Collection computes embeddings for the OASIS, SOBEM, and professional-image datasets; Emotion Associations and Profession Associations compute the association effect sizes and p-values, as sketched below.
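For reference, the effect size an EAT reports follows Caliskan et al. (2017): each target embedding is scored by its mean cosine similarity to one attribute set minus its mean similarity to the other, the effect size is the standardized difference of those scores between the two target sets, and a permutation test supplies the p-value. Below is a minimal NumPy sketch of that computation; it is illustrative rather than the repository's exact implementation, and it assumes all embeddings are L2-normalized arrays of shape (n, d).

```python
# Minimal EAT/WEAT-style sketch (after Caliskan et al., 2017); not this
# repository's exact code. X, Y: two sets of target embeddings (e.g., two
# image groups); A, B: two sets of attribute embeddings. All inputs are
# L2-normalized NumPy arrays of shape (n, d), so dot products are cosines.
import numpy as np

def association(W, A, B):
    # s(w, A, B): mean cosine similarity to A minus mean similarity to B
    return (W @ A.T).mean(axis=1) - (W @ B.T).mean(axis=1)

def effect_size(X, Y, A, B):
    # Standardized mean difference of association scores (Cohen's d style)
    sx, sy = association(X, A, B), association(Y, A, B)
    return (sx.mean() - sy.mean()) / np.concatenate([sx, sy]).std(ddof=1)

def p_value(X, Y, A, B, n_perm=10_000, seed=0):
    # One-sided permutation test: how often does a random re-partition of
    # X ∪ Y yield a test statistic at least as large as the observed one?
    rng = np.random.default_rng(seed)
    stat = lambda X_, Y_: association(X_, A, B).sum() - association(Y_, A, B).sum()
    observed, pooled, n = stat(X, Y), np.concatenate([X, Y]), len(X)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        hits += stat(pooled[idx[:n]], pooled[idx[n:]]) >= observed
    return hits / n_perm
```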

For example, to run the EAT on images of professionals, run python all_sex_profession_collection.py in the Embedding Collection directory, then run python all_sex_profession_associations.py in the Profession Associations directory.
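Concretely, assuming the folder layout described above, those two steps look like this:

```sh
cd "Embedding Collection"
python all_sex_profession_collection.py      # compute and save the embeddings
cd "../Profession Associations"
python all_sex_profession_associations.py    # compute effect sizes and p-values
```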

The Grad-CAM saliency maps can be reproduced by running the Jupyter notebook CLIP_GradCAM_Visualization.ipynb.
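For readers who want to see the mechanism, here is a generic Grad-CAM sketch (Selvaraju et al., 2017) applied to CLIP's ResNet image encoder; it is a simplified stand-in for the notebook, not its exact code. It assumes the openai/CLIP package, the RN50 checkpoint, and a hypothetical image path example.jpg, and it takes saliency with respect to the image-text similarity score.

```python
# Generic Grad-CAM sketch for CLIP's ResNet visual encoder; illustrative
# only, not the notebook's code. Requires: pip install torch pillow
# git+https://github.com/openai/CLIP.git
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

# Capture activations and gradients at the last conv stage via hooks
acts, grads = {}, {}
layer = model.visual.layer4
h1 = layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # hypothetical path
text = clip.tokenize(["a photo of a person"]).to(device)               # example caption

# Backprop the image-text similarity score to the conv activations
score = torch.cosine_similarity(model.encode_image(image), model.encode_text(text))
model.zero_grad()
score.backward()

# Grad-CAM: weight each activation map by its spatially averaged gradient,
# sum over channels, ReLU, and normalize to [0, 1]. The resulting low-res
# map (7x7 for a 224-px RN50 input) is then upsampled over the image.
weights = grads["v"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * acts["v"]).sum(dim=1)).squeeze(0)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

h1.remove()
h2.remove()
```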
