# Gradient Inversion Attacks and Defenses

Here we provide a (growing) list of research papers on gradient inversion attacks and defenses. Please feel free to submit an issue to report new or missing papers.

## Papers for attacks

Recent research shows that sharing gradients instead of data in Federated Learning can still leak private information. These attacks demonstrate that an adversary eavesdropping on a client's communication (i.e. observing the global model weights and the client's update) can accurately reconstruct the client's private data using a class of techniques known as "gradient inversion attacks", which raises serious concerns about such privacy leakage.
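As a concrete illustration, the sketch below shows the basic DLG-style optimization loop: the attacker initializes dummy data and labels, then optimizes them so that their gradient on the shared model matches the observed gradient. The toy model, input shapes, and optimizer settings here are illustrative assumptions, not taken from any specific paper below.

```python
import torch
import torch.nn.functional as F

# Toy model standing in for the shared global model (an assumed example).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

# --- Victim side: the gradient the attacker gets to observe ---
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
loss = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# --- Attacker side: optimize dummy data and labels to match the observed gradient ---
x_dummy = torch.randn(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft-label logits, also recovered
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(100):
    def closure():
        optimizer.zero_grad()
        # Cross-entropy against the (softmaxed) dummy label.
        dummy_loss = torch.sum(
            -F.log_softmax(model(x_dummy), dim=-1) * F.softmax(y_dummy, dim=-1)
        )
        dummy_grads = torch.autograd.grad(
            dummy_loss, model.parameters(), create_graph=True
        )
        # L2 distance between the dummy gradient and the observed gradient.
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff

    optimizer.step(closure)

# x_dummy now approximates the private input x_true.
```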

| Attack name | Paper | Venue | Additional Information Other Than Gradients | Supported | Official implementation |
| --- | --- | --- | --- | --- | --- |
| DLG | Deep Leakage from Gradients | NeurIPS 2019 | No | Yes | link |
| iDLG | iDLG: Improved Deep Leakage from Gradients | arXiv | No | Yes | link |
| Inverting Gradients | Inverting Gradients -- How easy is it to break privacy in federated learning? | NeurIPS 2020 | Batch Normalization statistics & private labels | Yes | link |
| R-GAP | R-GAP: Recursive Gradient Attack on Privacy | ICLR 2021 | The rank of the coefficient matrix (see Section 3.1.2 of its paper) | No (a relatively weak attack) | link |
| GradInversion | See through Gradients: Image Batch Recovery via GradInversion | CVPR 2021 | Batch Normalization statistics & a good approximation of private labels | No (code unavailable) | No |
| GIAS | Gradient Inversion with Generative Image Prior | NeurIPS 2021 | A GAN trained on the distribution of the training data | No (on our plan) | link |
| CAFE | CAFE: Catastrophic Data Leakage in Vertical Federated Learning | NeurIPS 2021 | Batch indices | No (on our plan) | link |

## Papers for defenses

To counter these attacks, researchers have proposed defense mechanisms, including:

- encrypting gradients, e.g. with secure aggregation protocols or homomorphic encryption, which are secure but require special setups and may introduce substantial overhead;
- perturbing gradients, e.g. by adding differentially private noise or pruning gradients, which requires trading off accuracy against privacy leakage (see the sketch below);
- encoding the input, e.g. with InstaHide, which encodes the input data fed to the model and likewise requires trading off accuracy against privacy leakage.
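For the gradient-perturbation family, here is a minimal sketch of what a client could apply to its update before sharing it; the clipping bound, noise scale, and pruning ratio are illustrative assumptions rather than tuned values from the papers below.

```python
import torch

def perturb_gradients(grads, clip_norm=1.0, noise_std=0.01, prune_ratio=0.9):
    """Clip, noise, and prune a list of gradient tensors before sharing them."""
    # Clip the global L2 norm of the update (DP-SGD-style clipping).
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
    grads = [g * scale for g in grads]

    # Add Gaussian noise (differentially-private-style perturbation).
    grads = [g + noise_std * torch.randn_like(g) for g in grads]

    # Zero out the smallest-magnitude entries (gradient pruning).
    flat = torch.cat([g.abs().flatten() for g in grads])
    threshold = torch.quantile(flat, prune_ratio)
    return [torch.where(g.abs() < threshold, torch.zeros_like(g), g) for g in grads]
```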

### Defenses for plain-text gradients

| Defense name | Paper | Venue | Supported | Official implementation |
| --- | --- | --- | --- | --- |
| DPSGD | Deep Learning with Differential Privacy | CCS 2016 | Yes | link |
| Gradient Pruning | Deep Leakage from Gradients | NeurIPS 2019 | Yes | link |
| MixUp | mixup: Beyond Empirical Risk Minimization | ICLR 2018 | Yes | link |
| InstaHide | InstaHide: Instance-hiding Schemes for Private Distributed Learning | ICML 2020 | Yes | link |
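As an example of the input-encoding family, here is a minimal sketch of a MixUp-style batch transform, assuming a batch of inputs with one-hot labels; the alpha parameter and shapes are illustrative assumptions.

```python
import torch

def mixup_batch(x, y_onehot, alpha=1.0):
    """Return convex combinations of randomly paired examples and their labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mixed, y_mixed
```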

### Defenses that encrypt gradients

| Defense name | Paper | Venue | Official implementation |
| --- | --- | --- | --- |
| Secure Aggregation | Practical Secure Aggregation for Federated Learning on User-Held Data | NeurIPS 2016 | No |
| FastSecAgg | FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning | CCS 2020 | No |
| LightSecAgg | LightSecAgg: Rethinking Secure Aggregation in Federated Learning | arXiv | No |
| Homomorphic Encryption | Privacy-Preserving Deep Learning via Additively Homomorphic Encryption | ATIS 2017 | No |