Gradient Leakage Defense (GradDefense) is a flexible, standalone library that protects against gradient leakage attacks through a minimal, easy-to-use API, and it integrates with off-the-shelf ML training pipelines. The core methods in GradDefense are random perturbation, compensation by denoising, and (optional) sample-wise gradient clipping. GradDefense_Walkthrough.ipynb (run with Jupyter Notebook) provides a walkthrough for quickly getting started with GradDefense.
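The sketch below illustrates the general idea behind these three methods on flattened per-sample gradients. It is a minimal, hypothetical example in plain PyTorch, not GradDefense's actual API; all function names, the soft-threshold denoiser, and the parameter values are illustrative assumptions.

```python
# Hypothetical sketch of the three defense steps -- NOT GradDefense's API.
import torch

def clip_per_sample(per_sample_grads, max_norm=1.0):
    """Optional: clip each sample's gradient to an L2 norm bound,
    then aggregate, so no single sample dominates the shared gradient."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    scale = (max_norm / (norms + 1e-12)).clamp(max=1.0)
    return (per_sample_grads * scale).mean(dim=0)

def perturb(grad, sigma=0.1):
    """Random perturbation: add Gaussian noise before sharing the gradient."""
    return grad + sigma * torch.randn_like(grad)

def denoise(grad, threshold=0.05):
    """Compensation by denoising: a generic soft-threshold shrinkage
    stand-in that suppresses noise to recover model utility."""
    return torch.sign(grad) * torch.clamp(grad.abs() - threshold, min=0.0)

# Toy usage: 8 samples, a 100-parameter model, gradients flattened per sample.
per_sample = torch.randn(8, 100)
g = clip_per_sample(per_sample, max_norm=1.0)  # optional clipping
g = perturb(g, sigma=0.1)                      # privacy perturbation
g = denoise(g, threshold=0.05)                 # utility compensation
```

The intuition: clipping bounds each sample's influence, the added noise obscures the information an attacker could invert, and the denoising step compensates for the accuracy loss the noise would otherwise cause.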
If you find this code useful in your research, please consider citing our paper:
@inproceedings{wang2022protect,
  author    = {Wang, Junxiao and Guo, Song and Xie, Xin and Qi, Heng},
  title     = {Protect Privacy from Gradient Leakage Attack in Federated Learning},
  booktitle = {IEEE International Conference on Computer Communications (INFOCOM)},
  year      = {2022}
}