pytorch gradient attack #91
foolbox/foolbox/models/pytorch.py, Line 114 in 021f02a
Why do we need to divide the gradient by the std? I do not think it is necessary.
self._process_gradient simply backprops the gradient through the preprocessing step (subtraction of the mean, scaling).
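To spell out the chain rule behind that (assuming the preprocessing has the usual form of subtracting the mean and dividing by the std, as described above):

\tilde{x} = \frac{x - \text{mean}}{\text{std}}
\qquad\Longrightarrow\qquad
\frac{\partial L}{\partial x}
= \frac{\partial L}{\partial \tilde{x}} \cdot \frac{\partial \tilde{x}}{\partial x}
= \frac{1}{\text{std}} \cdot \frac{\partial L}{\partial \tilde{x}}

So the gradient the model gives you (with respect to the preprocessed image) has to be divided by the std to become the gradient with respect to the original image.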
This is the line that does the backprop: foolbox/foolbox/models/pytorch.py, Line 98 in 021f02a.
My point is: why do we need this? foolbox/foolbox/models/base.py, Line 75 in 021f02a.
Based on foolbox/foolbox/attacks/gradient.py, Line 30 in 021f02a, the estimated gradient is applied to the original image. Why do we need to apply the preprocessing to the gradient, then?
See, the processing pipeline is this: original image -> preprocessed image -> model output. Most importantly, the model only sees the preprocessed image! Hence, loss.backward() gives you the gradient with respect to the preprocessed image. self._process_gradient takes this gradient and returns the gradient with respect to the original image. If there is no preprocessing, the two are identical.
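A minimal sketch of that idea (not Foolbox's actual code; the mean/std values and the toy loss are made up for illustration), showing that differentiating through the preprocessing is the same as dividing the preprocessed-space gradient by the std:

```python
import torch

# Illustrative preprocessing constants (assumed here, not taken from Foolbox)
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

# Original image with gradient tracking
x = torch.rand(3, 8, 8, requires_grad=True)

# Preprocessing as part of the differentiable graph: x_pre = (x - mean) / std
x_pre = (x - mean) / std

# Stand-in for "model output -> loss"; any scalar loss would do
loss = x_pre.sum()
loss.backward()

# For this loss the gradient w.r.t. the preprocessed image is 1 everywhere,
# so by the chain rule the gradient w.r.t. the original image must be 1 / std.
grad_wrt_preprocessed = torch.ones_like(x)
print(torch.allclose(x.grad, grad_wrt_preprocessed / std))  # True
```

In Foolbox the model only ever sees the already-preprocessed array, so loss.backward() only gets you the middle part of that chain; dividing by the std is the remaining step back to the original image.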
@wielandbrendel I got it! Thanks!