
Fed-CDP: Gradient Leakage Resilient Federated Learning

description

This is the $\alpha$ version of Fed-CDP. We are working on $\beta$ and later versions.

This is the code for differentially private federated learning (our paper) that is resilient to gradient privacy leakage. For gradient leakage attacks, see the CPL attack and the attack code. An adaptation to the centralized setting can be found in (our paper).

Federated learning faces three types of gradient leakage threats, depending on where the leakage occurs.

[Figure: threat model]

Existing approaches to federated learning with differential privacy (coined Fed-SDP) provide only client-level differential privacy, with per-client per-round noise.

[Figure: Fed-SDP]
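
As a rough illustration of per-client per-round noising, here is a minimal NumPy sketch of Fed-SDP-style server aggregation: each client update is clipped to an L2 bound and Gaussian noise is added to the average. This is not the code in FedSDP.py; the clipping bound S, the noise multiplier z, and the helper fed_sdp_aggregate are illustrative assumptions.

```python
# A minimal NumPy sketch of Fed-SDP-style aggregation (per-client
# per-round noise). Illustrative only, not the code in FedSDP.py;
# S (clipping bound) and z (noise multiplier) are assumed parameters.
import numpy as np

def fed_sdp_aggregate(client_updates, S=1.0, z=1.0, rng=None):
    """Clip each client's update to L2 norm S, average them, then add
    Gaussian noise scaled to the per-client sensitivity S / m."""
    rng = rng or np.random.default_rng()
    m = len(client_updates)
    clipped = [u * min(1.0, S / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, z * S / m, size=avg.shape)
    return avg + noise

# Toy usage: 10 clients, each sending a flattened 5-dimensional update.
updates = [np.random.randn(5) for _ in range(10)]
noisy_global_update = fed_sdp_aggregate(updates)
```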

Our approach to federated learning with differential privacy (referred to as Fed-CDP) provides an instance-level differential privacy guarantee with per-example per-local-iteration noise. By the properties of differential privacy, the instance-level guarantee also ensures client-level differential privacy in federated learning.

[Figure: Fed-CDP]
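
For contrast, here is a minimal NumPy sketch of one per-example per-local-iteration noising step in the spirit of Fed-CDP (DP-SGD-style clipping and noising of per-example gradients). Again, this is illustrative rather than the code in FedCDP.py; the clip bound C, the noise multiplier z, and dp_local_step are assumed names.

```python
# A minimal NumPy sketch of one Fed-CDP-style local iteration
# (per-example per-local-iteration noise, in the spirit of DP-SGD).
# Illustrative only, not the code in FedCDP.py; C (clip bound), z
# (noise multiplier), and dp_local_step are assumed names.
import numpy as np

def dp_local_step(params, per_example_grads, lr=0.1, C=1.0, z=1.0, rng=None):
    """Clip each example's gradient to L2 norm C, sum, add Gaussian
    noise with std z*C, average over the batch, and take an SGD step."""
    rng = rng or np.random.default_rng()
    b = len(per_example_grads)
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, z * C, size=params.shape)) / b
    return params - lr * noisy_mean

# Toy usage: one local step on a batch of 8 per-example gradients.
params = np.zeros(5)
batch_grads = [np.random.randn(5) for _ in range(8)]
params = dp_local_step(params, batch_grads)
```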

how to run

  • First, create the conda environment from environment.yml.

  • Then run create_FLdistribution.sh to create a client data distribution (each client holds two shards). The distribution file xxx_1000_clients.pkl will appear in the client folder; you can then run the rest.

  • FedSDP.py contains the code for training a benign model as well as the existing differentially private federated learning approach (McMahan, H. Brendan, Daniel Ramage, Kunal Talwar, and Li Zhang. "Learning differentially private recurrent language models." arXiv preprint arXiv:1710.06963 (2017)). This is the Fed-SDP baseline, which adds per-client per-round noise and provides only client-level differential privacy.

  • FedCDP.py contains the proposed Fed-CDP approach, which adds per-example per-local-iteration noise for instance-level differential privacy. Due to the composition theorem, the instance-level noise provides both example-level and client-level differential privacy.

  • privacy_accounting_fed_clientlevel.py and privacy_accounting_fed_instancelevel.py compute the epsilon privacy spending at the client level (for Fed-SDP, with sampling rate = #participating clients / #total clients) and at the instance level (for Fed-CDP, with sampling rate = batch size * #participating clients / #global data); see the sketch after this list. We consider five privacy accounting methods: basic composition, advanced composition, optimal composition, zCDP, and the moments accountant.

  • For gradient leakage attacks, please refer to our CPL attacks.
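
The two sampling rates mentioned above feed directly into the privacy accountants. Below is a small sketch of how they are computed; the concrete numbers (1000 clients, 100 participants per round, batch size 32, 60000 training examples) are hypothetical placeholders, not values taken from the paper or the scripts.

```python
# Sampling rates used by the two privacy accountants described above.
# The concrete numbers below are hypothetical placeholders.
total_clients = 1000
participating_clients = 100
batch_size = 32
global_data_size = 60000

# Fed-SDP (client level): fraction of clients sampled per round.
q_client = participating_clients / total_clients

# Fed-CDP (instance level): fraction of the global data touched per
# local iteration across all participating clients.
q_instance = batch_size * participating_clients / global_data_size

print(f"client-level sampling rate:   {q_client:.4f}")    # 0.1000
print(f"instance-level sampling rate: {q_instance:.4f}")  # 0.0533
```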

If you are interested in our research, please cite:

@inproceedings{wei2020framework,
  title={A framework for evaluating client privacy leakages in federated learning},
  author={Wei, Wenqi and Liu, Ling and Loper, Margaret and Chow, Ka-Ho and Gursoy, Mehmet Emre and Truex, Stacey and Wu, Yanzhao},
  booktitle={European Symposium on Research in Computer Security},
  year={2020},
  publisher={Springer}
}

@inproceedings{wei2021gradient,
  title={Gradient-Leakage Resilient Federated Learning},
  author={Wei, Wenqi and Liu, Ling and Wu, Yanzhao and Su, Gong and Iyengar, Arun},
  booktitle={International Conference on Distributed Computing Systems},
  year={2021},
  publisher={IEEE}
}

@article{wei2021gradient_tifs,
  title={Gradient Leakage Attack Resilient Deep Learning},
  author={Wei, Wenqi and Liu, Ling},
  journal={Transactions on Information Forensics and Security},
  year={2021},
  publisher={IEEE}
}

Feel free to send me an email at wenqiwei@gatech.edu or raise an issue if you have any questions.
