
🔥Model Inversion Attack ToolBox v1.0🔥

Python 3.10 | PyTorch 2.0.1 | torchvision 0.15.2 | CUDA 11.8

Yixiang Qiu*, Hongyao Yu*, Hao Fang*, Wenbo Yu, Bin Chen#, Xuan Wang, Shu-Tao Xia

Welcome to MIA! This repository is a comprehensive, well-organized, open-source Python benchmark for model inversion attacks that is easy to get started with. It provides uniform implementations of advanced and representative model inversion methods, forming a unified and reliable framework for convenient and fair comparisons between different model inversion attacks.

If you have any concerns about our toolbox, feel free to contact us at qiuyixiang@stu.hit.edu.cn, yuhongyao@stu.hit.edu.cn, and fang-h23@mails.tsinghua.edu.cn.

Also, you are always welcome to contribute and make this repository better!

🚧 MIA v2.0 is coming soon

We are now in the second stage of development, and the following updates will be implemented soon:

  • More recently proposed attacks
  • Representative defense algorithms
  • MI attacks on graph and language modalities
  • Support for training your own generative models
  • Refactored code built around a unified trainer
  • A pip-installable package

🚀 Introduction

Model inversion is an emerging and powerful class of privacy attacks, in which a malicious attacker reconstructs data that follows the same distribution as the training dataset of the target model.
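For intuition, the minimal sketch below illustrates the core loop shared by many GAN-based white-box attacks listed later in this README: a latent code is optimized so that a pre-trained generator produces images that the target classifier assigns to a chosen identity. The `generator` and `target_model` objects are hypothetical placeholders for illustration only, not part of the toolbox's API.

```python
import torch
import torch.nn.functional as F

def whitebox_inversion(generator, target_model, target_class, latent_dim=100,
                       steps=1000, lr=0.02, device="cuda"):
    """Minimal GAN-based white-box inversion sketch (not the toolbox API).

    Optimizes a latent code z so that generator(z) is classified as
    `target_class` by the frozen target model.
    """
    generator.eval()
    target_model.eval()
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        fake = generator(z)                 # candidate reconstruction
        logits = target_model(fake)         # white-box access to logits
        # Identity loss: push the generated image toward the target class.
        loss = F.cross_entropy(logits, torch.tensor([target_class], device=device))
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return generator(z)
```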

We developed this toolbox because the MI research line suffers from a lack of unified standards and reliable implementations of prior studies. We hope our work helps people in this area and promotes the progress of their valuable research.

💡 Features

  • Easy to get started with.
  • Provides all pre-trained model files.
  • Always up to date.
  • Well organized and encapsulated.
  • A unified and fair comparison between attack methods.

📝 Model Inversion Attacks

| Method | Paper | Publication | Scenario | Key Characteristics |
| --- | --- | --- | --- | --- |
| DeepInversion | Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion | CVPR'2020 | whitebox | student-teacher, data-free |
| GMI | The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | CVPR'2020 | whitebox | the first GAN-based MIA, instance-level |
| KEDMI | Knowledge-Enriched Distributional Model Inversion Attacks | ICCV'2021 | whitebox | the first MIA that recovers data distributions, pseudo-labels |
| VMI | Variational Model Inversion Attacks | NeurIPS'2021 | whitebox | variational inference, special loss function |
| SecretGen | SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination | ECCV'2022 | whitebox, blackbox | instance-level, data augmentation |
| BREPMI | Label-Only Model Inversion Attacks via Boundary Repulsion | CVPR'2022 | blackbox | boundary repelling, label-only |
| Mirror | MIRROR: Model Inversion for Deep Learning Network with High Fidelity | NDSS'2022 | whitebox, blackbox | both gradient-free and gradient-based, genetic algorithm |
| PPA | Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | ICML'2022 | whitebox | initial selection, pre-trained GANs, results selection |
| PLGMI | Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network | AAAI'2023 | whitebox | pseudo-labels, data augmentation, special loss function |
| C2FMI | C2FMI: Corse-to-Fine Black-box Model Inversion Attack | TDSC'2023 | whitebox, blackbox | gradient-free, two-stage |
| LOMMA | Re-Thinking Model Inversion Attacks Against Deep Neural Networks | CVPR'2023 | whitebox | special loss, model augmentation |
| RLBMI | Reinforcement Learning-Based Black-Box Model Inversion Attacks | CVPR'2023 | blackbox | reinforcement learning |
| LOKT | Label-Only Model Inversion Attacks via Knowledge Transfer | NeurIPS'2023 | blackbox | surrogate models, label-only |
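The Scenario column distinguishes white-box attacks, which can back-propagate through the target model as in the sketch above, from black-box and label-only attacks, which only observe model outputs. The toy sketch below illustrates the weakest access model with the same hypothetical `generator` and `target_model` placeholders; it is a generic illustration, not the algorithm of any paper in the table, and real label-only attacks (e.g., boundary repulsion or surrogate-model training) are far more query-efficient.

```python
import torch

@torch.no_grad()
def label_only_random_search(generator, target_model, target_class,
                             latent_dim=100, n_queries=5000, device="cuda"):
    """Toy label-only baseline: keep samples the target labels as `target_class`."""
    hits = []
    for _ in range(n_queries):
        z = torch.randn(1, latent_dim, device=device)
        image = generator(z)
        # Only the hard label is observed, mimicking the label-only scenario.
        pred = target_model(image).argmax(dim=1).item()
        if pred == target_class:
            hits.append(image.cpu())
    return hits
```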

📝 Model Inversion Defenses

| Method | Paper | Publication | Key Characteristics |
| --- | --- | --- | --- |
| DPSGD | Deep Learning with Differential Privacy | CCS'2016 | add noise on gradient |
| ViB / MID | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | AAAI'2021 | variational method, mutual information, special loss function |
| BiDO | Bilateral Dependency Optimization: Defending Against Model-inversion Attacks | KDD'2022 | special loss function |
| TL | Model Inversion Robustness: Can Transfer Learning Help? | CVPR'2024 | transfer learning |
| LS | Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks | ICLR'2024 | label smoothing |
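As a concrete example of how lightweight some of these defenses are, the LS defense essentially amounts to changing the training loss of the target classifier. The sketch below uses PyTorch's built-in positive label smoothing; the ICLR'2024 paper additionally studies negative smoothing factors, which require a custom loss. The `model` and `train_loader` arguments are placeholders, not toolbox objects.

```python
import torch
import torch.nn as nn

def train_with_label_smoothing(model, train_loader, epochs=10, smoothing=0.1,
                               lr=0.01, device="cuda"):
    """Train a target classifier with label smoothing (sketch of the LS idea)."""
    model.to(device).train()
    # CrossEntropyLoss supports smoothing factors in [0, 1] since PyTorch 1.10.
    criterion = nn.CrossEntropyLoss(label_smoothing=smoothing)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```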

🔧 Environments

MIA can be set up with the following steps:

1. Clone this repository and create the virtual environment with Anaconda:

```bash
git clone https://github.com/ffhibnese/Model_Inversion_Attack_ToolBox.git
cd ./Model_Inversion_Attack_ToolBox
conda create -n MIA python=3.10
conda activate MIA
```

2. Install the required dependencies:

```bash
pip install -r requirements.txt
```
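After installation, an optional sanity check (not part of the official scripts) can confirm that the installed versions match the badges above and that CUDA is visible:

```python
import torch
import torchvision

# Expected roughly: torch 2.0.1, torchvision 0.15.2, CUDA 11.8
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```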

📄 Datasets and Model Checkpoints

  • For datasets, download them by following the detailed instructions in ./dataset/<DATASET_NAME>/README.md.
  • For pre-trained models, we provide all the related model weight files.
    Download the pre-trained models here and place them in ./checkpoints/. The detailed file-path structure is shown in ./checkpoints_structure.txt.

GenForce models will be downloaded automatically by the provided scripts.

🐎 Run Examples

Attack

We provide detailed running scripts for the attack algorithms in ./attack_scripts/. You can run any attack algorithm with the following command, and the experimental results will be produced in ./results/<ATTACK_METHOD>/ by default:

```bash
python attack_scripts/<ATTACK_METHOD>.py
```

For more details, see here.

Defense

We provide simple running scripts of defense algorithms in ./defense_scripts/.

To train a model with a defense algorithm, run

```bash
python defense_scripts/<DEFENSE_METHOD>.py
```

The training logs will be produced in ./results/<DEFENSE_METHOD>/<DEFENSE_METHOD>.log by default.

To evaluate the effectiveness of a defense, attack the defended model by running

```bash
python defense_scripts/<DEFENSE_METHOD>_<ATTACK_METHOD>.py
```

The attack results will be produced in ./results/<DEFENSE_METHOD>_<ATTACK_METHOD> by default.

For more details, see here.

📔 Citation

If you find our work helpful for your research, please kindly cite our paper:

```bibtex
@misc{fang2024privacy,
      title={Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses},
      author={Hao Fang and Yixiang Qiu and Hongyao Yu and Wenbo Yu and Jiawei Kong and Baoli Chong and Bin Chen and Xuan Wang and Shu-Tao Xia},
      year={2024},
      eprint={2402.04013},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

✨ Acknowledgement

We express our great gratitude to all the researchers who have contributed to the model inversion community.

In particular, we thank the authors of PLGMI for their high-quality code for the datasets, metrics, and three of the attack methods. Their great devotion helps us make MIA better!
