Training Samples #81
Have a look at the sample code in the README. In the case of 100 images and fast attacks like FGSM and DeepFool, basically all you need is to put a loop around the last line:

import foolbox
import keras
import numpy as np
from keras.applications.resnet50 import ResNet50
# instantiate model
keras.backend.set_learning_phase(0)
kmodel = ResNet50(weights='imagenet')
preprocessing = (np.array([104, 116, 123]), 1)
fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255), preprocessing=preprocessing)
# get source image and label
image, label = foolbox.utils.imagenet_example()
# apply attack on source image
attack = foolbox.attacks.FGSM(fmodel)
fgsm_adversarials = []
for image, label in zip(some_training_images, corresponding_labels):
adversarial = attack(image[:,:,::-1], label)
fgsm_adversarials.append(adversarial)

If you want to do this for a much larger set of images or on the fly during training, Foolbox might not be the right tool – performance is not its focus. The power of Foolbox is a large set of attacks that makes it easy to reliably test the robustness of models.
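To collect adversarials from both FGSM and DeepFool for the same images, the loop above can be generalized over several attacks. Below is a minimal sketch; `run_attacks` and the stub attack callables are hypothetical names for illustration — in practice the callables would be Foolbox attack instances such as `foolbox.attacks.FGSM(fmodel)` and `foolbox.attacks.DeepFoolAttack(fmodel)`. Note that a Foolbox attack returns `None` when it fails to find an adversarial, so those results are filtered out:

```python
import numpy as np

def run_attacks(attacks, images, labels):
    """Apply each attack callable to every (image, label) pair.

    `attacks` maps an attack name to a callable with the signature
    attack(image, label) -> adversarial image, or None when the attack
    fails (the Foolbox convention).
    Returns a dict mapping attack name -> list of adversarial images.
    """
    results = {name: [] for name in attacks}
    for name, attack in attacks.items():
        for image, label in zip(images, labels):
            adversarial = attack(image, label)
            if adversarial is not None:  # skip failed attacks
                results[name].append(adversarial)
    return results

# Stub attacks for illustration only; with Foolbox you would use e.g.
#   attacks = {"fgsm": foolbox.attacks.FGSM(fmodel),
#              "deepfool": foolbox.attacks.DeepFoolAttack(fmodel)}
attacks = {
    "fgsm": lambda img, lbl: img + 0.1,      # placeholder perturbation
    "deepfool": lambda img, lbl: img - 0.1,  # placeholder perturbation
}
images = [np.zeros((4, 4, 3)) for _ in range(3)]
labels = [0, 1, 2]
adversarials = run_attacks(attacks, images, labels)
```

With 100 images and both attacks this yields up to 200 adversarials (fewer if some attacks fail and return None).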
Makes sense. Thanks for the FAST response. Once Foolbox exposes the fragility of my models, which I expect it will (this is a FANTASTIC tool), I want to re-train on the adversarial samples and re-test. Also, what is this part of the code doing? Thanks again for your help!!
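Re-training on a mix of originals and adversarials then amounts to concatenating the two sets, keeping the original labels for the adversarials (the goal is to teach the model to classify the perturbed images correctly). A minimal numpy sketch with made-up array names and random stand-in data:

```python
import numpy as np

# Hypothetical data: 5 original images plus adversarial versions of them,
# e.g. collected from an attack loop like the one in the README snippet.
originals = np.random.rand(5, 32, 32, 3)
labels = np.arange(5)
adversarials = originals + 0.05  # stand-in for real adversarial images

# Adversarials keep the labels of the images they were derived from.
x_train = np.concatenate([originals, adversarials], axis=0)
y_train = np.concatenate([labels, labels], axis=0)

# Shuffle so originals and adversarials are mixed within batches.
rng = np.random.RandomState(0)
perm = rng.permutation(len(x_train))
x_train, y_train = x_train[perm], y_train[perm]
```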
That's just part of the preprocessing expected by the ResNet implementation in Keras, i.e. it expects BGR color channel ordering and channel mean subtraction (done a few lines before that).
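Concretely, `preprocessing=(np.array([104, 116, 123]), 1)` tells Foolbox to subtract that per-channel mean and divide by 1 before feeding an image to the model, and the `image[:, :, ::-1]` indexing in the attack call reverses the channel axis from RGB to BGR. A pure-numpy illustration of those two steps (the 2x2 image here is made up):

```python
import numpy as np

mean = np.array([104, 116, 123])  # per-channel mean from the README
std = 1

# A dummy 2x2 RGB image with values in [0, 255].
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)

# Step 1: reverse the channel axis, RGB -> BGR.
bgr = rgb[:, :, ::-1]

# Step 2: what the preprocessing=(mean, std) tuple applies internally.
model_input = (bgr - mean) / std
```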
@pGit1 can this be closed?

Absolutely!! Thank you!!!
Can foolbox be used on a model that I trained on a different domain than ImageNet? Will the Keras wrapper take as input any Keras model I build?
Awesome! Thank you so much!
On Tue, Mar 13, 2018, Jonas Rauber wrote:

@pGit1 Foolbox can be used to attack any machine learning model, nothing is specific to ImageNet. The foolbox.models.KerasModel wrapper for Keras models should be able to handle any Keras model that follows the conventions of the keras.models.Model class (https://keras.io/models/model/#model-class-api).
How do I create training samples from adversarially perturbed original training samples?
To keep it simple: suppose I had 100 training images and wanted to use DeepFool and FGSM to perturb these samples. I should end up with 200 adversarial samples plus the 100 originals to train on. How do I go about this in the most efficient way with this library?
Sample code very much appreciated! :D