Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature
**Requirements**

- python 3.9
- torch 1.8
- pretrainedmodels 0.7
- numpy 1.19
- pandas 1.2
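The dependencies above can be installed with pip. The pinned patch versions below are assumptions (the list only gives major.minor versions), so adjust them to whatever builds are available for your platform:

```shell
# Install the dependencies listed above; the patch versions are assumed, not specified by the repo
pip install torch==1.8.0 pretrainedmodels==0.7.4 numpy==1.19.5 pandas==1.2.0
```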
**Prepare models**

Download the pretrained PyTorch models here, which are converted from widely used TensorFlow models. Then put these models into `./models/`.
**Generate adversarial examples by SVD under Inception-v3**

```
# Implement MI-FGSM, DI-FGSM, TI-FGSM or TI-DIM
CUDA_VISIBLE_DEVICES=gpuid python MI_FGSM.py
# Implement PI-FGSM or PI-TI-DI-FGSM
CUDA_VISIBLE_DEVICES=gpuid python PI_FGSM.py
# Implement SI-NI-FGSM or SI-NI-TI-DIM
CUDA_VISIBLE_DEVICES=gpuid python SI_NI_FGSM.py
# Implement VT-MI-FGSM
CUDA_VISIBLE_DEVICES=gpuid python VT_MI_FGSM.py
# Implement S2I-FGSM or S2I-TI-DIM
CUDA_VISIBLE_DEVICES=gpuid python S2I_FGSM.py
```
where `gpuid` can be set to any free GPU ID on your machine. The adversarial examples will be generated in the directory `./adv_img`.
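The method's name refers to keeping the top-1 singular component of an SVD of an intermediate feature map. As a rough, self-contained illustration of that decomposition (a NumPy sketch, not the repository's actual implementation, which operates on PyTorch tensors inside the network; `top1_decompose` is a hypothetical helper name):

```python
import numpy as np

def top1_decompose(feature):
    """Rank-1 (top-1 singular component) approximation of a feature map."""
    c, h, w = feature.shape
    mat = feature.reshape(c, h * w)          # flatten spatial dims into a matrix
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    top1 = s[0] * np.outer(u[:, 0], vt[0])   # keep only the largest singular component
    return top1.reshape(c, h, w)

# Toy usage: the flattened result is always a rank-1 matrix
feat = np.random.rand(8, 4, 4)
approx = top1_decompose(feat)
print(approx.shape)  # (8, 4, 4)
```

The rank-1 reconstruction retains the dominant structure of the feature map while discarding the remaining components.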
**Evaluations on normally trained models**

Run `verify.py` to evaluate the attack success rate:

```
python verify.py
```
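The attack success rate reported here is the fraction of adversarial images that the target model misclassifies. A minimal sketch of that metric (with a hypothetical `predict` callable standing in for a real model; this is not the code in `verify.py`):

```python
def attack_success_rate(predict, adv_images, labels):
    # an attack "succeeds" when the model's prediction differs from the true label
    fooled = sum(1 for x, y in zip(adv_images, labels) if predict(x) != y)
    return fooled / len(labels)

# Toy usage: a "model" that always predicts class 0
rate = attack_success_rate(lambda x: 0, ["img1", "img2", "img3"], [0, 1, 2])
print(rate)  # two of the three labels are not 0, so the rate is ~0.667
```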
**Evaluations on defenses**

To evaluate the attack success rates on defense models, we test nine defense models: three adversarially trained models (Inc-v3ens3, Inc-v3ens4, IncRes-v2ens) and six more advanced defenses (HGD, R&P, NIPS-r3, RS, JPEG, NRP).
- Inc-v3ens3, Inc-v3ens4, IncRes-v2ens: you can directly run `verify.py` to test these models.
- HGD, R&P, NIPS-r3: we directly run the code from the corresponding official repo.
- RS: noise=0.25, N=100, skip=100. Download it from the corresponding official repo.
- JPEG: refer to here.
- NRP: purifier=NRP, dynamic=True, base_model=Inc-v3ens3. Download it from the corresponding official repo.