
# Data AUDIT: Identifying Attribute Utility- and Detectability-Induced Bias in Task Models

MICCAI 2023

We audit at the dataset level to develop targeted hypotheses about the bias that downstream models will inherit. We focus on identifying potential shortcuts and define two metrics, which we term "utility" and "detectability". Utility measures how much information knowing an attribute's value conveys about the task label. Detectability measures how well a downstream model could extract the attribute's values from the image set, excluding task-related information.
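
As a minimal sketch of the utility idea (an illustration only, not necessarily the estimator used in this repository), the mutual information between a discrete attribute and the task label can be computed with scikit-learn; the toy arrays below are hypothetical placeholders:

```python
# Minimal sketch: estimate "utility" as the mutual information between a
# discrete attribute and the task label. Illustration only; the data below
# is hypothetical and this may differ from the repository's estimator.
import numpy as np
from sklearn.metrics import mutual_info_score

# One attribute value and one task label per image (hypothetical data).
attribute = np.array([0, 0, 1, 1, 1, 0, 1, 0])
label = np.array([0, 0, 1, 1, 0, 0, 1, 1])

# Mutual information in nats; higher values mean the attribute is more
# informative about the label, and thus a stronger potential shortcut.
utility = mutual_info_score(attribute, label)
print(f"I(attribute; label) = {utility:.3f} nats")
```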

## Code Information

Please note that reproducing the full results requires training several hundred models. For convenience, the codebase is split into separate config files that recreate each experiment. To run an experiment, point `train_test.py` at its config, for example:

```
python ./train_test.py --experiment_cfg_path C:/path/to/repo/dmaudit/experiments/Experiment_0_Mutual/experiment_0_config_autoaugment.py
```
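
Because reproducing the full results means running many configs, a small driver script can batch them. This is a hypothetical helper: the `experiments/` layout and config-file glob pattern are inferred from the example path above, and `REPO_ROOT` is a placeholder for your local checkout.

```python
# Hypothetical driver that runs train_test.py once per experiment config.
# Assumes it is launched from the repository root, matching the example
# command above; the directory layout and config naming are inferred.
import subprocess
from pathlib import Path

REPO_ROOT = Path("/path/to/repo/dmaudit")  # placeholder; adjust for your machine

# Collect every experiment config file under experiments/ (assumed layout).
configs = sorted(REPO_ROOT.glob("experiments/**/experiment_*_config_*.py"))

for cfg in configs:
    print(f"Running {cfg} ...")
    subprocess.run(
        ["python", "./train_test.py", "--experiment_cfg_path", str(cfg)],
        check=True,  # stop on the first failed experiment
    )
```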

More details and further code cleanup are coming shortly.
