Code for "Differential Privacy Has Disparate Impact on Model Accuracy" NeurIPS'19
Repository layout:

- data/
- models/
- tfcode/
- utils/
- .gitignore
- README.md
- compute_dp_sgd_privacy.py
- helper.py
- image_helper.py
- inception.py
- playing.py
- playing_nlp.py
- requirements.txt
- text_helper.py


The paper discusses how differential privacy (specifically DP-SGD from [1]) impacts model accuracy for underrepresented groups.

## Usage

Configure the environment by running: `pip install -r requirements.txt`

We use Python 3.7 and an NVIDIA Titan X GPU.

The entry point is `playing.py`. It reads parameters from `utils/params.yaml` (the settings used in the paper) and writes the computation graph to TensorBoard. For sentiment prediction, use `playing_nlp.py`.
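
For orientation, a minimal sketch of loading that parameter file with PyYAML; the key names mentioned in the comment are illustrative, and the actual schema lives in `utils/params.yaml`:

```python
# A sketch of loading the experiment configuration, assuming playing.py
# reads utils/params.yaml with PyYAML (key names below are hypothetical).
import yaml

with open('utils/params.yaml') as f:
    params = yaml.safe_load(f)

print(params)  # e.g. dataset, batch size, noise multiplier, clipping norm
```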

## Datasets

  1. MNIST (part of PyTorch; see the loading sketch after this list)
  2. Diversity in Faces (obtained from IBM here)
  3. iNaturalist (download from here)
  4. UTKFace (from here)
  5. AAE Twitter corpus (from here)
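
Since MNIST ships with PyTorch (dataset 1 above), a minimal loading sketch via `torchvision`; the `data/` root directory is an assumption:

```python
# Download MNIST into data/ via torchvision. The root path is an
# assumption, not necessarily what the repo's loaders use.
from torchvision import datasets, transforms

mnist_train = datasets.MNIST(root='data/', train=True, download=True,
                             transform=transforms.ToTensor())
mnist_test = datasets.MNIST(root='data/', train=False, download=True,
                            transform=transforms.ToTensor())
```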

We use `compute_dp_sgd_privacy.py`, copied from the public TensorFlow Privacy repo, to compute the (ε, δ) privacy guarantee of a DP-SGD run.
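
A hedged example of what that script computes, assuming the local copy exposes the same `compute_dp_sgd_privacy(...)` function as TensorFlow Privacy's version; the hyperparameter values are illustrative, not the paper's settings:

```python
# Compute the (epsilon, delta) guarantee for a DP-SGD run. Assumes the
# copied script exposes TF Privacy's compute_dp_sgd_privacy function;
# all values below are illustrative.
from compute_dp_sgd_privacy import compute_dp_sgd_privacy

eps, opt_order = compute_dp_sgd_privacy(
    n=60000,               # number of training examples
    batch_size=256,
    noise_multiplier=1.1,  # ratio of noise std to the clipping norm
    epochs=60,
    delta=1e-5)
print('epsilon = %.2f at RDP order %s' % (eps, opt_order))
```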

The DP-FedAvg implementation is taken from a public repo.
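
For orientation, a minimal NumPy sketch of the server-side DP-FedAvg aggregation step from [3]; this is an illustration, not the borrowed implementation:

```python
# Server-side DP-FedAvg step [3]: clip each client update to a fixed
# norm, average, and add Gaussian noise calibrated to the clip norm.
import numpy as np

def dp_fedavg_aggregate(client_updates, clip_norm, noise_multiplier,
                        rng=None):
    rng = rng or np.random.default_rng()
    # Scale each update down so its l2 norm is at most clip_norm.
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in client_updates]
    mean = np.mean(clipped, axis=0)
    # Noise std: clip_norm * noise_multiplier, divided by the cohort size.
    std = clip_norm * noise_multiplier / len(client_updates)
    return mean + rng.normal(0.0, std, size=mean.shape)
```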

The DP-SGD implementation is based on the TensorFlow Privacy repo and the following papers (a sketch of the DP-SGD optimizer appears after the references):

[1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In CCS, 2016.

[2] H. B. McMahan and G. Andrew. A general approach to adding differential privacy to iterative training procedures. arXiv:1812.06210, 2018.

[3] H. B. McMahan, D. Ramage, K. Talwar, and L. Zhang. Learning differentially private recurrent language models. In ICLR, 2018.
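
A minimal sketch of wiring up TF Privacy's DP-SGD optimizer from [1] and [2] (TF 1.x-era API; the model and hyperparameter values are illustrative, not this repo's settings):

```python
# DP-SGD [1, 2] via TensorFlow Privacy's optimizer (TF 1.x API).
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer import (
    DPGradientDescentGaussianOptimizer)

features = tf.placeholder(tf.float32, [256, 784])
labels = tf.placeholder(tf.int32, [256])
logits = tf.layers.dense(features, 10)

# Vector (per-example) loss: DP optimizers clip each example's gradient
# to l2_norm_clip before adding Gaussian noise to the sum.
loss = tf.losses.sparse_softmax_cross_entropy(
    labels=labels, logits=logits, reduction=tf.losses.Reduction.NONE)

optimizer = DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,        # per-example gradient clipping norm
    noise_multiplier=1.1,    # noise std relative to the clip norm
    num_microbatches=256,    # one microbatch per example
    learning_rate=0.15)
train_op = optimizer.minimize(loss)
```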
