Learning Transferable Adversarial Examples via Ghost Networks

Introduction

This repository contains the code for the paper Learning Transferable Adversarial Examples via Ghost Networks. In this paper, we propose Ghost Networks to efficiently learn transferable adversarial examples. The key principle of ghost networks is to perturb an existing model, which potentially generates a huge set of diverse models. These models are then fused by longitudinal ensemble. Both steps require almost no extra time or space. Experiments show that this method consistently gains additional transferability for iteration-based methods such as I-FGSM and MI-FGSM.

![demo](demo.png)
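The attack loop can be sketched with a toy model (a hedged illustration only, not the repository's TensorFlow implementation: the linear model, loss, and hyperparameters below are stand-ins). Each iteration of MI-FGSM queries a freshly perturbed "ghost" of the base model, and momentum fuses the per-iteration gradients into a longitudinal ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

def ghost_loss_grad(x, y, w, keep_prob):
    # Each call applies a fresh random dropout mask to the input features,
    # so every query hits a slightly different "ghost" of the base linear
    # model logits = x @ w.
    mask = (rng.random(x.shape) < keep_prob).astype(x.dtype)
    logits = (x * mask / keep_prob) @ w
    loss = -logits[y]                     # untargeted: push the true logit down
    grad = -(mask / keep_prob) * w[:, y]  # d(loss)/dx for this ghost
    return loss, grad

def mi_fgsm_ghost(x, y, w, eps=0.03, steps=10, mu=1.0, keep_prob=0.99):
    """MI-FGSM over a longitudinal ensemble of ghost networks: iteration t
    attacks the ghost drawn at iteration t, momentum fuses their gradients."""
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        _, grad = ghost_loss_grad(x_adv, y, w, keep_prob)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized gradient
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```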

Extension

To improve the transferability further, we

Usage

Dependencies

  • Anaconda
  • Python3.6
  • Tensorflow 1.10.0
  • Tensorpack 0.9.0.1
  • easydict
  • scipy
  • pillow

Here is a sample script to install the dependencies once you have Anaconda.

conda create -n python3 python=3.6
source activate python3
pip install tensorflow-gpu==1.10.0
pip install tensorpack==0.9.0.1
pip install easydict
conda install -c anaconda scipy
pip install pillow

Dataset and model checkpoints

We use images from the ImageNet LSVRC 2012 validation set, resized to 299x299. You can download the preprocessed images HERE if you accept the terms.

We use 6 cleanly trained models (Inception-{v3, v4}, ResNet-v2-{50, 101, 152}, Inception-ResNet-v2) and 3 ensemble-adversarially trained models (ens3_inception_v3, ens4_inception_v3, ens_inception_resnet_v2). We originally downloaded them from here and here and then slightly modified the tensor names. You can download the modified checkpoints from HERE.

After downloading them, edit data/link_to_data.sh and use it to create the soft links data/checkpoints and data/val_data:

bash data/link_to_data.sh

We assign every network an ID so that each can be referenced by a single character. The table below lists the ID of each network; see lines 58 to 62 of config.py for details.

| ID | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|----|---|---|---|---|---|---|---|---|---|
| Network Name | IncV3 | IncV4 | Res50 | Res101 | Res152 | IncRes | Ens3IncV3 | Ens4IncV3 | EnsIncRes |
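The mapping can be pictured as a simple dictionary (a hypothetical sketch; the authoritative definition lives in config.py, and the variable and function names below are illustrative only). The full model names come from the checkpoint list above:

```python
# Hypothetical sketch of the ID-to-network mapping; see config.py for
# the real definition.
network_map = {
    "0": "inception_v3",
    "1": "inception_v4",
    "2": "resnet_v2_50",
    "3": "resnet_v2_101",
    "4": "resnet_v2_152",
    "5": "inception_resnet_v2",
    "6": "ens3_inception_v3",
    "7": "ens4_inception_v3",
    "8": "ens_inception_resnet_v2",
}

def parse_attack_networks(ids):
    """Expand an --attack_network string like "01" into model names."""
    return [network_map[c] for c in ids]
```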

Attack and Eval Examples

This section provides some examples; check config.py for more options.

Basic FGSM, I-FGSM, and MI-FGSM

Attack inception_v3 (ID=0) with FGSM and evaluate the success rate:

bash pipeline.sh --exp FGSM --attack_network 0 --num_steps 1 --max_epsilon 8.0 --step_size 8.0 --GPU_ID 0

Attack inception_v3 (ID=0) with I-FGSM and evaluate the success rate:

bash pipeline.sh --exp I-FGSM --attack_network 0 --GPU_ID 0

Attack inception_v3 (ID=0) with MI-FGSM and evaluate the success rate:

bash pipeline.sh --exp MI-FGSM --attack_network 0 --momentum 1.0 --GPU_ID 0

Our proposed method

Attack inception_v3 (ID=0) with MI-FGSM plus dropout erosion (using the optimal keep_prob) and evaluate the success rate:

# these two commands are equivalent, since the optimal keep_prob for inception_v3 is 0.994
bash pipeline.sh --exp MI-FGSM-0.994 --attack_network 0 --momentum 1.0 --keep_prob 0.994 --GPU_ID 0
bash pipeline.sh --exp MI-FGSM-optimal --attack_network 0 --momentum 1.0 --optimal --GPU_ID 0
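Dropout erosion can be sketched as follows (a hedged, framework-free illustration of the idea; the repository applies it inside the TensorFlow graph, and the function name here is hypothetical). Randomly zeroing a small fraction of activations and rescaling makes each forward pass behave like a slightly different model; a keep_prob close to 1.0, such as 0.994 for inception_v3, keeps the ghost close to the base network:

```python
import numpy as np

def dropout_erosion(features, keep_prob, rng):
    """Randomly zero intermediate activations and rescale by 1/keep_prob,
    so the expected activation is preserved while each forward pass
    samples a different ghost network."""
    mask = rng.random(features.shape) < keep_prob
    return features * mask / keep_prob
```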

Attack resnet_v2_50 (ID=2) with MI-FGSM plus residual erosion (using the optimal Lambda) and evaluate the success rate:

# these two commands are equivalent, since the optimal Lambda for resnet_v2_50 is 0.22
bash pipeline.sh --exp MI-FGSM --attack_network 2 --momentum 1.0 --random_range 0.22 --GPU_ID 0
bash pipeline.sh --exp MI-FGSM --attack_network 2 --momentum 1.0 --optimal --GPU_ID 0
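Residual erosion can be sketched like this (again a hedged illustration, not the repository's implementation; the function name and the exact placement of the random factor are assumptions based on the paper's description). The skip connection of a residual block is scaled by a random factor drawn per forward pass, where --random_range controls the width of the sampling interval:

```python
import numpy as np

def eroded_residual_block(x, f, random_range, rng):
    """Residual erosion, sketched: scale the identity branch of a
    residual block by lambda ~ U[1 - random_range, 1 + random_range].
    With random_range=0 this reduces to a standard residual block."""
    lam = rng.uniform(1.0 - random_range, 1.0 + random_range)
    return lam * x + f(x)
```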

Attack multiple networks (ensemble attack, Liu et al.)

Simply pass all network IDs to --attack_network. If your GPU runs out of memory, reduce --batch_size.

For example, attack the ensemble of inception_v3 and inception_v4 with batch_size=2:

bash pipeline.sh --exp FGSM --attack_network 01 --num_steps 1 --max_epsilon 8.0 --step_size 8.0 --batch_size 2 --GPU_ID 0
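The ensemble attack of Liu et al. fuses the models into a single objective; a minimal sketch (the function name and the loss/gradient interface are illustrative, not the repository's API):

```python
import numpy as np

def ensemble_loss_grad(x, loss_grad_fns):
    """Ensemble attack, sketched: average the losses (and hence the
    gradients) of several models, then run the usual FGSM/I-FGSM/MI-FGSM
    update on the fused objective."""
    losses, grads = zip(*(fn(x) for fn in loss_grad_fns))
    return float(np.mean(losses)), np.mean(grads, axis=0)
```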

Trouble Shooting

Traceback (most recent call last):
  File "attack.py", line 5, in <module>
    from config import config as FLAGS
  File "/home/yingwei/lyw/mount_point/ghost-network/config.py", line 81, in <module>
    assert config.overwrite or config.skip, "{:s}".format(config.result_dir)
AssertionError: result/I-FGSM...

This assertion is raised because config.result_dir already exists, usually because you ran the same script twice. You can either

  1. remove that directory, or
  2. add the --overwrite option to ignore the issue.
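The guard behind the assertion can be pictured like this (a simplified sketch, assuming the behavior of config.py; the function name is hypothetical):

```python
import os

def check_result_dir(result_dir, overwrite=False, skip=False):
    """Refuse to reuse an existing result directory unless --overwrite
    or --skip is given, so results are not silently clobbered."""
    assert overwrite or skip or not os.path.exists(result_dir), result_dir
```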

If you find the code useful, please consider citing the following paper.

@article{li2018learning,
  title={Learning Transferable Adversarial Examples via Ghost Networks},
  author={Li, Yingwei and Bai, Song and Zhou, Yuyin and Xie, Cihang and Zhang, Zhishuai and Yuille, Alan},
  journal={arXiv preprint arXiv:1812.03413},
  year={2018}
}

If you encounter any problems or have any inquiries, please contact us at yingwei.li@jhu.edu.