
Variance Tuning

This repository contains code to reproduce results from the paper:

Enhancing the Transferability of Adversarial Attacks through Variance Tuning (CVPR 2021)

Xiaosen Wang, Kun He

We also include the torch version code in the framework TransferAttack.

Requirements

  • Python >= 3.6.5
  • Tensorflow >= 1.12.0
  • Numpy >= 1.15.4
  • opencv >= 3.4.2
  • scipy >= 1.1.0
  • pandas >= 1.0.1
  • imageio >= 2.6.1

Quick Start

Prepare the data and models

Download the data and pretrained models, then place them in dev_data/ and models/, respectively.
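Before running an attack, it can help to confirm the layout is in place. A minimal sketch (the helper `check_layout` is hypothetical, not part of the repo):

```python
from pathlib import Path

def check_layout(root="."):
    """Return the expected data/model directories that are missing under root.

    The README expects dev_data/ (images) and models/ (pretrained checkpoints).
    """
    required = ("dev_data", "models")
    return [d for d in required if not (Path(root) / d).is_dir()]
```

An empty return value means both directories exist and the attack scripts can be run.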

Variance Tuning Attack

All the provided scripts generate adversarial examples on the inception_v3 model. To attack other models, replace the model in the graph and batch_grad functions and load those models in the main function.
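The variance-tuning update at the heart of these scripts can be sketched in plain NumPy (this is a simplified, framework-free sketch of one iteration; `grad_fn` is a placeholder for the loss gradient, not the repo's actual TensorFlow API):

```python
import numpy as np

def vmi_fgsm_step(x, g_prev, grad_fn, eps=16/255, alpha=1.6/255,
                  mu=1.0, beta=1.5, n_samples=20):
    """One iteration of the variance-tuned momentum attack (VMI-FGSM).

    grad_fn(x) -> gradient of the classification loss w.r.t. the input x.
    g_prev is the accumulated momentum from the previous iteration.
    """
    grad = grad_fn(x)
    # Gradient variance: average gradient over a uniform neighborhood of x,
    # minus the gradient at x itself.
    radius = beta * eps
    neighbor_grads = [grad_fn(x + np.random.uniform(-radius, radius, x.shape))
                      for _ in range(n_samples)]
    v = np.mean(neighbor_grads, axis=0) - grad
    # Momentum accumulation with the variance-tuned, L1-normalized gradient.
    tuned = grad + v
    g = mu * g_prev + tuned / (np.sum(np.abs(tuned)) + 1e-12)
    # Sign step, projected back onto the eps-ball around x.
    x_adv = np.clip(x + alpha * np.sign(g), x - eps, x + eps)
    return x_adv, g
```

The DI/TI/SI variants in the repo additionally apply input diversity, translation-invariant kernels, and scale copies around the `grad_fn` call; the variance term `v` is unchanged.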

Running the attack

Taking the vmi_di_ti_si_fgsm attack as an example, run:

CUDA_VISIBLE_DEVICES=gpuid python vmi_di_ti_si_fgsm.py 

The generated adversarial examples are stored in the directory ./outputs. Then run simple_eval.py to evaluate the attack success rate on each model used in the paper:

CUDA_VISIBLE_DEVICES=gpuid python simple_eval.py
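Conceptually, the evaluation counts how many adversarial examples each target model misclassifies. A minimal sketch (with a hypothetical `classify` function standing in for a loaded model; this is not the actual interface of simple_eval.py):

```python
import numpy as np

def attack_success_rate(adv_images, true_labels, classify):
    """Fraction of adversarial examples that fool the target model.

    classify(images) -> array of predicted labels; an attack succeeds
    when the prediction differs from the true label.
    """
    preds = np.asarray(classify(adv_images))
    return float(np.mean(preds != np.asarray(true_labels)))
```

A higher success rate on models other than the one the examples were crafted on indicates better transferability, which is what Table 4 in the paper reports.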

Evaluation settings for Table 4

  • HGD, R&P, NIPS-r3: We directly run the code from the corresponding repo.
  • Bit-Red: step_num=4, alpha=200, base_model=Inc_v3_ens.
  • JPEG: No extra parameters.
  • FD: resize to 304*304 for FD and then resize back to 299*299, base_model=Inc_v3_ens
  • ComDefend: resize to 224*224 for ComDefend and then resize back to 299*299, base_model=Resnet_101
  • RS: noise=0.25, N=100, skip=100
  • NRP: purifier=NRP, dynamic=True, base_model=Inc_v3_ens

More details are in third_party.

Acknowledgments

The code is based on SI-NI-FGSM.

Contact

Questions and suggestions can be sent to xswanghuster@gmail.com.