Foolbox for non-image inputs? #80
Hi @EdwardRaff, sorry for not responding earlier. Somehow, I accidentally marked this issue as read. In principle, it should be no problem to use Foolbox in this scenario. Image-specific things like the …
@EdwardRaff can this be closed?
I don't think so. I just tried using the library as you described, and got an unhelpful error message.

Traceback (most recent call last):
File "fool_malconv.py", line 108, in <module>
adversarial = attack(batch, 1)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/base.py", line 89, in __call__
find = Adversarial(model, criterion, image, label)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 61, in __init__
self.predictions(original_image)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 235, in predictions
assert not strict or self.in_bounds(image)
AssertionError

The code that does the work:

fmodel = PyTorchModel(model, bounds=(0, 255), num_classes=2, cuda=args.num_gpus > 0)
attack = foolbox.attacks.FGSM(fmodel)
adversarial = attack(batch, 1)
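The failing assertion, `assert not strict or self.in_bounds(image)`, fires when input values fall outside the `bounds` passed to the model wrapper, e.g. a value of 256 in data declared with bounds=(0, 255). A minimal numpy sketch of such a range check (illustrative only, not Foolbox's actual code):

```python
import numpy as np

def in_bounds(x, bounds):
    """Mimic the kind of range check a model wrapper performs on its
    input before running an attack (illustrative sketch)."""
    lo, hi = bounds
    return x.min() >= lo and x.max() <= hi

# A value of 256 (e.g. a special extra token) exceeds bounds=(0, 255):
seq = np.array([10, 42, 256], dtype=np.float32)
assert not in_bounds(seq, (0, 255))
assert in_bounds(seq, (0, 256))
```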
Looks like that error was caused by too loose a bound, but fixing it still results in an error.

Traceback (most recent call last):
File "fool_malconv.py", line 108, in <module>
adversarial = attack(batch, 1)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/base.py", line 89, in __call__
find = Adversarial(model, criterion, image, label)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 61, in __init__
self.predictions(original_image)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 238, in predictions
predictions = self.__model.predictions(image)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/base.py", line 122, in predictions
return np.squeeze(self.batch_predictions(image[np.newaxis]), axis=0)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/pytorch.py", line 64, in batch_predictions
predictions = self._model(images)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/edraff/Development/SeveringMalConv/web/model.py", line 73, in forward
x = self.lookup_table(x)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 94, in forward
)(input, self.weight)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/thnn/sparse.py", line 33, in forward
assert indices.dim() <= 2
AssertionError
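The final assertion, `indices.dim() <= 2`, comes from the embedding lookup: it accepts integer index arrays of shape (N,) or (batch, N) only. A toy numpy analogue of that lookup (illustrative, not the real PyTorch implementation):

```python
import numpy as np

def embedding_lookup(weight, indices):
    """Toy analogue of an embedding layer: row lookup by integer index.
    Like the layer in the traceback, it only accepts 1-D or 2-D index
    arrays (illustrative sketch, not PyTorch's code)."""
    assert indices.ndim <= 2, "embedding expects (N,) or (batch, N) indices"
    return weight[indices]

vocab, dim = 257, 8            # e.g. 256 byte values plus an extra token
weight = np.random.randn(vocab, dim)

batch = np.array([[1, 2, 255]])          # shape (1, 3): fine
out = embedding_lookup(weight, batch)
assert out.shape == (1, 3, dim)
```

Note that Foolbox adds a batch axis itself (via `image[np.newaxis]`), so passing an already-batched input can push the index array to three dimensions and trip exactly this assertion.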
Could you please provide the full code or a minimal example to reproduce the problem? Also, you are feeding a variable called …
My data is byte-based, so there are 256 possible byte values (and a special EOF token that I think it was unhappy about). Unfortunately, I'm not allowed to share the code at this moment in time. I was using batches, but with a batch size of 1. I used numpy.squeeze to remove the first dimension, and now get this error.

Traceback (most recent call last):
File "fool_malconv.py", line 110, in <module>
adversarial = attack(np.squeeze(batch,axis=0), 1)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/base.py", line 89, in __call__
find = Adversarial(model, criterion, image, label)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 61, in __init__
self.predictions(original_image)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 238, in predictions
predictions = self.__model.predictions(image)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/base.py", line 122, in predictions
return np.squeeze(self.batch_predictions(image[np.newaxis]), axis=0)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/pytorch.py", line 65, in batch_predictions
predictions = predictions.data
AttributeError: 'tuple' object has no attribute 'data'
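This AttributeError suggests the wrapped model's forward returns a tuple (e.g. logits plus auxiliary outputs) where a single tensor is expected. One common workaround, sketched here in plain Python with hypothetical names rather than torch.nn, is a thin wrapper that keeps only the logits:

```python
class LogitsOnly:
    """Wrap a model whose forward returns a tuple, exposing only the
    first element (assumed to be the logits). Illustrative sketch; in
    PyTorch this would be an nn.Module wrapping the real model."""
    def __init__(self, model):
        self.model = model

    def __call__(self, x):
        out = self.model(x)
        return out[0] if isinstance(out, tuple) else out

# Toy model returning (logits, hidden_state):
toy = lambda x: ([0.1, 0.9], "hidden")
wrapped = LogitsOnly(toy)
assert wrapped(None) == [0.1, 0.9]
```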
Which Foolbox and PyTorch versions are you using? Is your PyTorch model (the …) …? It looks like the …
Foolbox currently assumes that the … A very simple net (used in our test cases) can, for example, be implemented like this. It doesn't do anything useful, but it shows that things should be fine as long as your …

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

    def forward(self, x):
        x = torch.mean(x, 3)
        x = torch.squeeze(x, dim=3)
        x = torch.mean(x, 2)
        x = torch.squeeze(x, dim=2)
        logits = x
        return logits
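Tracing shapes through this toy net clarifies the contract: a (batch, channels, height, width) input is reduced to (batch, channels) logits, with the channel axis playing the role of classes (the explicit squeezes presumably exist because reductions kept the reduced dimension in the PyTorch release of that era). A numpy analogue of the same reduction:

```python
import numpy as np

# numpy analogue of the toy forward() above: average out W, then H
x = np.random.randn(4, 2, 5, 5)       # (batch, channels, H, W)
logits = x.mean(axis=3).mean(axis=2)  # -> (batch, channels)
assert logits.shape == (4, 2)
```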
I'm using PyTorch version 1.12 and the latest version of foolbox from pip. The information about what is expected is very useful; please consider adding it to the documentation. I'd also encourage error messages that indicate what the problem is. I've adjusted my code to match your stated expectations, but I'm still having some trouble. If I return the logits as an array of shape (2) (1 dimension, 2 values for the 2 classes), I get an error in batch_predictions asserting that predictions.ndim == 2.

Traceback (most recent call last):
File "fool_malconv.py", line 116, in <module>
adversarial = attack(np.squeeze(batch,axis=0), 1)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/base.py", line 89, in __call__
find = Adversarial(model, criterion, image, label)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 61, in __init__
self.predictions(original_image)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 238, in predictions
predictions = self.__model.predictions(image)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/base.py", line 122, in predictions
return np.squeeze(self.batch_predictions(image[np.newaxis]), axis=0)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/pytorch.py", line 69, in batch_predictions
assert predictions.ndim == 2
AssertionError

If I return an array of shape (1, 2), I get an error in predictions_and_gradient asserting that the image ndim == 3.

Traceback (most recent call last):
File "fool_malconv.py", line 116, in <module>
adversarial = attack(np.squeeze(batch,axis=0), 1)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/base.py", line 98, in __call__
_ = self._apply(adversarial, **kwargs)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/gradientsign.py", line 22, in _apply
gradient = a.gradient()
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 322, in gradient
gradient = self.__model.gradient(image, label)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/base.py", line 208, in gradient
_, gradient = self.predictions_and_gradient(image, label)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/pytorch.py", line 89, in predictions_and_gradient
assert image.ndim == 3
AssertionError
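Taken together, the two assertions pin down the expected contract: the attack is called with a single unbatched example, the wrapper adds a batch axis, the model maps (batch, …) inputs to (batch, num_classes) logits, and the wrapper squeezes the batch axis away again. A hedged numpy sketch of that round trip (the stand-in model is hypothetical):

```python
import numpy as np

def model(batch):
    """Stand-in model: (batch, N) float input -> (batch, 2) logits."""
    assert batch.ndim == 2
    return np.stack([batch.mean(axis=1), -batch.mean(axis=1)], axis=1)

def predictions(image):
    """What the wrapper's predictions() does: batchify, run, squeeze."""
    logits = model(image[np.newaxis])  # add batch axis: (N,) -> (1, N)
    assert logits.ndim == 2            # the predictions.ndim == 2 check
    return np.squeeze(logits, axis=0)  # back to (num_classes,)

single = np.arange(6, dtype=np.float64)  # one unbatched example
assert predictions(single).shape == (2,)
```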
The second one is correct. The PyTorch model (i.e. the …) … The error you see in that case is indeed an image-specific assertion that should not be there. I will change this as soon as possible, but it will take a bit of time (running all the tests, publishing a new version on PyPI, etc.). For now, it would be great if you could just open the file …

Python 3.6.3 (default, Oct 6 2017, 08:44:35)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import foolbox
>>> foolbox
<module 'foolbox' from 'SOME PATH TO foolbox/__init__.py'>

Just go to that path and edit the two lines I mentioned. Let me know if this fixes the problem. Sorry for all the trouble. Originally, we developed Foolbox in a very image-centric way. As I said, this will change; for TensorFlow I think I verified that it works, but apparently for PyTorch these assertions were still left. Next time it would be great if you could create a minimal example that reproduces the error – it doesn't need to be your actual secret code ;-) Regarding the documentation: yes, it has to improve ;-)
No worries, I appreciate all the help! I made the changes you mentioned, and it now gets to what I was afraid of: what to do about the embedding layer. This is the error that comes out.

Traceback (most recent call last):
File "fool_malconv.py", line 116, in <module>
adversarial = attack(np.squeeze(batch,axis=0), 1)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/base.py", line 98, in __call__
_ = self._apply(adversarial, **kwargs)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/gradientsign.py", line 22, in _apply
gradient = a.gradient()
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 322, in gradient
gradient = self.__model.gradient(image, label)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/base.py", line 208, in gradient
_, gradient = self.predictions_and_gradient(image, label)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/pytorch.py", line 95, in predictions_and_gradient
predictions = self._model(images)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/edraff/Development/SeveringMalConv/web/model_infer.py", line 77, in forward
x = self.lookup_table(x)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 94, in forward
)(input, self.weight)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/thnn/sparse.py", line 34, in forward
assert not self.needs_input_grad[0], "Embedding doesn't " \
AssertionError: Embedding doesn't compute the gradient w.r.t. the indices

Yeah, I know a simplified example would help. Unfortunately, I don't get to fully dictate my time :-/ I'm working on getting approval to open-source the code, as it's not significant, but that takes time.
I also tried the LocalSearchAttack, since it seems to be non-gradient-based, but it errors asserting that the image has two axes, and the axes object is empty.

Traceback (most recent call last):
File "fool_malconv.py", line 117, in <module>
adversarial = attack(np.squeeze(batch,axis=0), 1)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/base.py", line 98, in __call__
_ = self._apply(adversarial, **kwargs)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/localsearch.py", line 87, in _apply
assert len(axes) == 2
AssertionError
Well, if your model isn't differentiable w.r.t. its inputs, it's hard to apply a gradient-based adversarial attack. The problem with the LocalSearchAttack is that it has a built-in notion of pixels and is therefore image-specific. I do think it should be possible to generalize the attack to arbitrary inputs, but I haven't thought about it in detail. Maybe you want to give it a try? Alternatively, it might be possible to change the shape of your data to something that looks like an image, e.g. an image with width 1, height N, and 1 color channel. Again, no guarantees, but I think that should be easy and might work. Finally, you can use a decision-based attack: it requires even less than the …
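The reshaping workaround suggested above is mechanical: a length-N sequence becomes an N-by-1 "image" with one channel. A sketch (the exact axis order is an assumption; PyTorch-style wrappers usually expect channels first):

```python
import numpy as np

seq = np.arange(12, dtype=np.float32)  # length-N sequence
img_like = seq.reshape(-1, 1, 1)       # (N, 1, 1): height N, width 1, one channel
assert img_like.shape == (12, 1, 1)

# ...and flatten back to a sequence after the attack:
assert np.array_equal(img_like.reshape(-1), seq)
```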
That is what I was asking about in my first post, since there is an embedding layer; I wasn't sure how it would be handled. I was able to get it to work with the …

Traceback (most recent call last):
File "fool_malconv.py", line 119, in <module>
adversarial = attack(np.squeeze(batch,axis=0), 1)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/boundary_attack.py", line 165, in __call__
threaded_gen=threaded_gen)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/base.py", line 98, in __call__
_ = self._apply(adversarial, **kwargs)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/boundary_attack.py", line 183, in _apply
return self._apply_inner(pool, *args, **kwargs)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/boundary_attack.py", line 206, in _apply_inner
assert external_dtype in [np.float32, np.float64]
AssertionError

I also tried it with the GPU, since the error was so strange, and got this even more confusing error. It looks like it got what it was expecting.

Traceback (most recent call last):
File "fool_malconv.py", line 119, in <module>
adversarial = attack(np.squeeze(batch,axis=0), 1)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/boundary_attack.py", line 165, in __call__
threaded_gen=threaded_gen)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/attacks/base.py", line 89, in __call__
find = Adversarial(model, criterion, image, label)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 61, in __init__
self.predictions(original_image)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/adversarial.py", line 238, in predictions
predictions = self.__model.predictions(image)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/base.py", line 122, in predictions
return np.squeeze(self.batch_predictions(image[np.newaxis]), axis=0)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/foolbox/models/pytorch.py", line 64, in batch_predictions
predictions = self._model(images)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/edraff/Development/SeveringMalConv/web/model_infer.py", line 79, in forward
x = self.lookup_table(x)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 94, in forward
)(input, self.weight)
File "/home/edraff/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/thnn/sparse.py", line 53, in forward
output = torch.index_select(weight, 0, indices.view(-1))
TypeError: torch.index_select received an invalid combination of arguments - got (torch.cuda.FloatTensor, int, torch.LongTensor), but expected (torch.cuda.FloatTensor source, int dim, torch.cuda.LongTensor index)
If your model requires inputs to be integers, it's hard to apply any of these attacks directly. Sorry if I missed that detail in your first message. I think it should still be possible to do what you want, but one first needs to define adversarials in your setting properly and then think about how they could be found. Technically, it might be possible to define a model that accepts floats and rounds them to integers, and then apply the attacks to that model. Without your code and a description of the scenario, I don't think I can be much help here. You might also want to get familiar with Foolbox in a more common setting (say, images or even audio data) first and then think about the differences to your problem.
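The "accept floats, round internally" idea can be sketched framework-agnostically. Note that gradients through rounding are zero almost everywhere, so this mainly helps decision-based attacks; the helper and the toy integer-input model below are hypothetical:

```python
import numpy as np

def make_float_model(int_model, low=0, high=255):
    """Wrap a model that needs integer inputs so it accepts floats:
    clip to the valid range and round before the real forward pass.
    Sketch only; int_model is a hypothetical integer-input model."""
    def float_model(x):
        ints = np.clip(np.rint(x), low, high).astype(np.int64)
        return int_model(ints)
    return float_model

int_model = lambda ints: ints.sum()  # toy integer-input "model"
fmodel = make_float_model(int_model)
assert fmodel(np.array([1.4, 253.7, 300.0])) == 1 + 254 + 255
```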
Hi @jonasrauber. I have run into several problems that confuse me. My Python is 3.6.3 on 64-bit Windows, and my computer cannot connect to the Internet, so I need your help. Could you give me an example showing how to use Foolbox to generate an adversarial example for MNIST? Ideally the code would be short and include an explanation. Thanks.
Hi, I read through the docs and issues, and couldn't find any information about this.
I want to generate adversarial examples for some non-image-based problems. In my specific case, the inputs are fixed-length sequences of integers, which then go into an embedding layer and then into the network.
Is there a way to use foolbox in this scenario at the moment?
Thanks for your time!