
Training Samples #81

Closed
pGit1 opened this issue Dec 1, 2017 · 8 comments

Comments

@pGit1

pGit1 commented Dec 1, 2017

How do I create training samples from adversarially perturbed original training samples?

To keep it simple, suppose I have 100 training images and want to use DeepFool and FGSM to perturb them. I should then end up with 200 adversarial samples plus the 100 originals to train on. What is the most efficient way to do this with this library?

Sample code very much appreciated! :D

@jonasrauber
Member

Have a look at the sample code in the README. In the case of 100 images and fast attacks like FGSM and DeepFool, basically all you need is to put a loop around the last line:

import foolbox
import keras
import numpy as np
from keras.applications.resnet50 import ResNet50

# instantiate model
keras.backend.set_learning_phase(0)
kmodel = ResNet50(weights='imagenet')
preprocessing = (np.array([104, 116, 123]), 1)
fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255), preprocessing=preprocessing)

# instantiate the attack
attack = foolbox.attacks.FGSM(fmodel)

# loop over your own training images and labels
# (some_training_images and corresponding_labels stand in for your data)
fgsm_adversarials = []
for image, label in zip(some_training_images, corresponding_labels):
    # reverse the channel axis (RGB -> BGR), as the Keras ResNet50 expects
    adversarial = attack(image[:, :, ::-1], label)
    fgsm_adversarials.append(adversarial)

If you want to do this for a much larger set of images or on the fly during training, Foolbox might not be the right tool, since performance is not its focus. The power of Foolbox is its large set of attacks, which makes it easy to reliably test the robustness of models.
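For the original question of building a training set from both attacks plus the originals, a minimal sketch could look like the following. It assumes some_training_images and corresponding_labels hold your own data, fmodel is the wrapped model from above, and that DeepFoolAttack is the DeepFool implementation available in your Foolbox version (check the exact class name):

import numpy as np
import foolbox

# run both attacks over the same images and collect the adversarials
attacks = [foolbox.attacks.FGSM(fmodel), foolbox.attacks.DeepFoolAttack(fmodel)]

adversarial_images, adversarial_labels = [], []
for attack in attacks:
    for image, label in zip(some_training_images, corresponding_labels):
        adversarial = attack(image[:, :, ::-1], label)
        if adversarial is not None:  # an attack can fail to find an adversarial
            # convert back from BGR to the original channel order
            adversarial_images.append(adversarial[:, :, ::-1])
            adversarial_labels.append(label)

# 100 originals plus up to 200 adversarials, all keeping their original labels
augmented_images = np.concatenate([np.stack(some_training_images),
                                   np.stack(adversarial_images)])
augmented_labels = np.concatenate([np.asarray(corresponding_labels),
                                   np.asarray(adversarial_labels)])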

@pGit1
Author

pGit1 commented Dec 1, 2017

Makes sense. Thanks for the FAST response. Once Foolbox exposes the fragility of my models, which I expect it will (this is a FANTASTIC tool), I want to re-train on the adversarial samples and re-test.

Also, what is image[:,:,::-1] doing in this code? Why do we need to reverse the channel axis of the image? I'm trying to figure out the intuition.

Thanks again for your help!!

@jonasrauber
Member

That's just part of the preprocessing expected by the ResNet implementation in Keras, i.e. it expects BGR color channel ordering and channel mean subtraction (done a few lines before that).
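As a rough illustration of what happens to an input before it reaches the network (the random array below is just a stand-in for an example image; the mean values are the ones passed to the Foolbox wrapper above):

import numpy as np

# a hypothetical RGB image with values in [0, 255]
rgb_image = np.random.uniform(0, 255, size=(224, 224, 3)).astype(np.float32)

# reverse the last (channel) axis: RGB -> BGR, as the Keras ResNet50 expects
bgr_image = rgb_image[:, :, ::-1]

# the per-channel mean subtraction is handled by the Foolbox wrapper via
# preprocessing=(np.array([104, 116, 123]), 1), i.e. (input - mean) / 1
mean = np.array([104, 116, 123])
preprocessed = (bgr_image - mean) / 1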

@jonasrauber
Member

@pGit1 can this be closed?

@pGit1
Author

pGit1 commented Dec 1, 2017

Absolutely!! Thank you!!!

@pGit1 pGit1 closed this as completed Dec 1, 2017
@pGit1
Author

pGit1 commented Mar 13, 2018

@jonasrauber

Can Foolbox be used on a model that I trained on a different domain than ImageNet? Will the Keras wrapper take as input any Keras model I build?

@jonasrauber
Member

@pGit1 Foolbox can be used to attack any machine learning model; nothing is specific to ImageNet. The foolbox.models.KerasModel wrapper should be able to handle any Keras model that follows the conventions of the keras.models.Model class.
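For example, a sketch of wrapping a model trained on a different domain might look like this (the architecture, weights file path, input range, and example image below are all made up for illustration):

import foolbox
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

# a small hypothetical classifier for 32x32 RGB images, e.g. CIFAR-10
kmodel = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    Flatten(),
    Dense(10, activation='softmax'),
])
# kmodel.load_weights('my_weights.h5')  # hypothetical path to your trained weights

# bounds must match the input range your model was trained with
fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 1))

attack = foolbox.attacks.FGSM(fmodel)
image = np.random.uniform(0, 1, size=(32, 32, 3)).astype(np.float32)  # placeholder image
adversarial = attack(image, 3)  # 3 = placeholder label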

@pGit1
Author

pGit1 commented Mar 26, 2018 via email
