About

Code for the article "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses" (https://arxiv.org/abs/1811.09600), presented at CVPR 2019 (oral presentation).

The implementation uses PyTorch 0.4.1 and runs with Python 3.6+. The attack code is also provided in TensorFlow. This repository additionally contains a PyTorch implementation of the C&W L2 attack, ported from Carlini's TensorFlow version.

For PyTorch 1.1+, check the pytorch1.1+ branch (scheduler.step() was moved to match the new calling order).
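For context, the PyTorch 1.1 change referenced above is about call order: since 1.1, the learning-rate scheduler's step() should be invoked after the optimizer's step(). A toy sketch with stand-in classes (not the real PyTorch objects) illustrating the post-1.1 ordering:

```python
# Stand-in classes mimicking the optimizer/scheduler interaction;
# real code would use torch.optim and torch.optim.lr_scheduler.
class ToyOptimizer:
    def __init__(self):
        self.updates = 0

    def step(self):
        self.updates += 1  # weight update


class ToyScheduler:
    def __init__(self, optimizer):
        self.optimizer = optimizer
        self.epochs = 0

    def step(self):
        self.epochs += 1  # advance the learning-rate schedule


optimizer = ToyOptimizer()
scheduler = ToyScheduler(optimizer)
for epoch in range(3):
    for batch in range(5):
        optimizer.step()  # update weights every batch
    scheduler.step()      # since PyTorch 1.1: call AFTER optimizer.step()
```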

Installation

This package can be installed via pip as follows:

pip install git+https://github.com/jeromerony/fast_adversarial

Using DDN to attack a model

import torch
from fast_adv.attacks import DDN

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
attacker = DDN(steps=100, device=device)

adv = attacker.attack(model, x, labels=y, targeted=False)

Here model is a PyTorch nn.Module that takes inputs x and outputs the pre-softmax activations (logits), x is a batch of images (N x C x H x W), and y holds either the true labels (for targeted=False) or the target labels (for targeted=True). Note: x is expected to be in the [0, 1] range; you can use fast_adv.utils.NormalizedModel to wrap any input normalization, such as mean subtraction, inside the model.
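The attack's name reflects its update rule: the perturbation direction comes from the gradient step, while its norm is adjusted multiplicatively and separately, shrunk when the current point is already adversarial and grown otherwise. A minimal sketch of that decoupled norm schedule (the function name and gamma value are illustrative, not the library's API):

```python
def ddn_norm_step(eps, is_adv, gamma=0.05):
    """One step of a DDN-style norm schedule: tighten the L2 ball
    when the current point is already adversarial, enlarge it otherwise."""
    return eps * (1 - gamma) if is_adv else eps * (1 + gamma)


# Toy trajectory: the norm grows until an adversarial point is found,
# then decays toward the smallest norm that still fools the model.
eps = 1.0
history = []
for is_adv in [False, False, True, True, False, True]:
    eps = ddn_norm_step(eps, is_adv)
    history.append(round(eps, 4))
```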

See the "examples" folder for a Python script and a Jupyter notebook example.

Adversarial training with DDN

The following commands were used to adversarially train the models:

MNIST:

python -m fast_adv.defenses.mnist --lr=0.01 --lrs=30 --adv=0 --max-norm=2.4 --sn=mnist_adv_2.4

CIFAR-10 (adversarial training starts at epoch 200):

python -m fast_adv.defenses.cifar10 -e=230 --adv=200 --max-norm=1 --sn=cifar10_wrn28-10_adv_1
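In both commands, --max-norm bounds the L2 norm of the training perturbations: perturbations produced during adversarial training are kept within an L2 ball of that radius. A stdlib-only sketch of such an L2 projection (the helper name is illustrative, not the repository's API):

```python
import math


def project_l2(delta, max_norm):
    """Scale a perturbation so its L2 norm does not exceed max_norm."""
    norm = math.sqrt(sum(d * d for d in delta))
    if norm <= max_norm:
        return list(delta)  # already inside the ball
    scale = max_norm / norm
    return [d * scale for d in delta]
```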

Adversarially trained models
