Privacy Testing for Deep Learning
Updated Jul 20, 2023 - Python
A comprehensive toolbox of model inversion attacks and defenses that is easy to get started with.
Reveals the vulnerabilities of SplitNN.
Code for "Variational Model Inversion Attacks" (Wang et al., NeurIPS 2021).
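The repositories above all revolve around model inversion: optimizing an input so a trained model assigns it high confidence for a target class, thereby recovering a class-representative (and potentially privacy-sensitive) input. A minimal sketch of the idea, using a hypothetical hand-set linear-softmax model rather than any of the repositories' code:

```python
import numpy as np

# Hypothetical toy target model: a linear-softmax classifier whose
# weights would normally come from training; fixed here for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))  # 2 classes, 4 input features
b = np.zeros(2)

def logits(x):
    return W @ x + b

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def invert(target_class, steps=200, lr=0.1):
    """Model inversion by gradient ascent on log p(target | x).

    For a softmax-linear model, d log p[t] / dx = W[t] - sum_c p[c] W[c],
    so we can ascend that gradient directly without autograd.
    """
    x = np.zeros(4)
    for _ in range(steps):
        p = softmax(logits(x))
        grad = W[target_class] - p @ W
        x += lr * grad
    return x

x_rec = invert(0)
confidence = softmax(logits(x_rec))[0]
print(confidence)
```

After a few hundred steps the recovered input is classified as the target class with near-certainty; real attacks apply the same optimization to deep networks, where the recovered input can leak features of the training data.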