Pytorch 0.4.0 compatibility and binary search option for (Iterative)Gradient(Sign)Attack #147

Closed

Conversation

@cjsg (Contributor) commented May 8, 2018

This PR implements 4 things:

  1. PyTorch 0.4.0 compatibility, to avoid the PyTorch warning caused by the deprecated use of volatile=True in foolbox/models/pytorch.py.
  2. An early_stop option for IterativeGradient(Sign)Attack.
  3. A decrease_if_first flag for IterativeGradient(Sign)Attack, similar to the one in Gradient(Sign)Attack.
  4. A binary search option for (Iterative)Gradient(Sign)Attack to speed up the (so far linear) search over epsilon; a minimal sketch of the idea follows this list. NB: this option requires early_stop=True.
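
For item 4, here is a minimal, hypothetical sketch of the idea behind the binary search over the perturbation size epsilon. It is not the code from this PR; binary_search_epsilon is a made-up name and is_adversarial stands for a single model query on the perturbed input.

# Hypothetical sketch, not the PR's implementation.
import numpy as np

def binary_search_epsilon(image, direction, is_adversarial,
                          max_epsilon=1.0, steps=10):
    # Replace the linear sweep over epsilon values with ~steps model queries.
    low, high = 0.0, max_epsilon
    best = None
    for _ in range(steps):
        eps = (low + high) / 2
        perturbed = np.clip(image + eps * direction, 0, 1)
        if is_adversarial(perturbed):
            best = perturbed   # adversarial: try to shrink epsilon further
            high = eps
        else:
            low = eps          # not adversarial: epsilon must grow
    return best

Compared to a linear sweep over, say, 1000 epsilon values, roughly 10 bisection steps locate a comparable threshold, which is where the speed-up comes from.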

@coveralls commented May 8, 2018

Coverage decreased (-0.3%) to 99.7% when pulling 04d4082 on cjsg:master into 1b69c9d on bethgelab:master.

@wielandbrendel (Member) commented

Thanks so much @cjsg! Would you mind adding a simple test of the binary search for both gradient and gradientsign? Then the coverage should recover.
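
(For illustration only, such a test could look roughly like the following sketch. The bn_adversarial fixture name, the bin_search keyword and the assertions are assumptions modeled on typical foolbox tests, not code taken from the repository.)

# Hypothetical test sketch; fixture, attack call and assertions are assumptions.
import numpy as np
from foolbox.attacks import GradientAttack, GradientSignAttack

def test_attacks_with_binary_search(bn_adversarial):
    adv = bn_adversarial
    for Attack in (GradientAttack, GradientSignAttack):
        Attack()(adv, bin_search=True)  # keyword name as currently used in this PR
    assert adv.image is not None
    assert adv.distance.value < np.inf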

@jonasrauber: should we keep PyTorch 0.3 compatibility? I am rather inclined to just require the latest PyTorch version; I am not sure why anyone would stay on v0.3.

@jonasrauber (Member) commented

Thanks @cjsg, looking into it now!

@wielandbrendel In principle I would be fine with dropping support for older PyTorch versions, since those users could still use old Foolbox versions. It could, however, make things more difficult when evaluating old PyTorch models, because one would then need to downgrade Foolbox. So I guess my question for @cjsg and other PyTorch users is: has the PyTorch community already mostly switched to PyTorch 0.4.0, and are new models released with 0.4.0 compatibility?

In any case, the .travis.yml config should be updated to use PyTorch 0.4.0 as part of this pull request.

@jonasrauber (Member) left a comment

I like it a lot and only have some very minor requests for changes; then I am happy to merge it. See my comments in the code, plus the Travis upgrade and 100% code coverage (no-cover marks only where absolutely necessary).

In the future, however, I think we need to clean up, unify and simplify the code of the attacks, reduce code duplication, etc.; not just of these attacks but in general.

images = Variable(images, volatile=True)
predictions = self._model(images)
predictions = predictions.data
if torch.__version__[:3] < '0.4':

Review comment (Member):

This version number comparison could break in the future (e.g. version 0.10.0; given the announcement of v1.0, this will probably never be the case, but still…)
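
A more robust alternative (a minimal sketch, not part of this PR; the helper name _forward is hypothetical) would be to compare parsed version objects instead of string prefixes, and to switch to torch.no_grad() on 0.4.0 and later:

# Sketch only: version check without string slicing, plus the 0.4-style
# replacement of volatile=True by torch.no_grad().
from distutils.version import LooseVersion

import torch
from torch.autograd import Variable

def _forward(model, images):
    if LooseVersion(torch.__version__) < LooseVersion('0.4.0'):
        # pre-0.4: volatile=True disables graph construction
        images = Variable(images, volatile=True)
        return model(images).data
    # 0.4.0+: volatile is deprecated, use torch.no_grad() instead
    with torch.no_grad():
        return model(images)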

@@ -17,7 +18,7 @@ class GradientAttack(Attack):

     @call_decorator
     def __call__(self, input_or_adv, label=None, unpack=True,
-                 epsilons=1000, max_epsilon=1):
+                 epsilons=1000, max_epsilon=1, bin_search=False):

Review comment (Member):

can we rename this to binary_search?

Review comment (Member):

not just here, of course


_, is_adversarial = a.predictions(perturbed)

iterator.is_adversarial = is_adversarial

Review comment (Member):

using a generator and its send method might be more pythonic, but I think it's fine ;-)
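
As an illustration of that suggestion (a hypothetical sketch, not foolbox code; run_attack_step is a stand-in for one perturbation plus model query), the epsilon schedule could be a generator that receives the is_adversarial feedback through yield/send rather than through an attribute set on a custom iterator:

# Hypothetical sketch of the generator/send pattern mentioned above.
def epsilon_schedule(max_epsilon=1.0, steps=10):
    low, high = 0.0, max_epsilon
    for _ in range(steps):
        eps = (low + high) / 2
        is_adversarial = yield eps   # value comes back in via send()
        if is_adversarial:
            high = eps
        else:
            low = eps

def run_attack_step(eps):
    # Stand-in: perturb the input with step size eps and query the model;
    # here we simply pretend that eps >= 0.3 is adversarial.
    return eps >= 0.3

schedule = epsilon_schedule()
eps = next(schedule)                 # prime the generator
while True:
    is_adv = run_attack_step(eps)
    try:
        eps = schedule.send(is_adv)  # feed the result back into the generator
    except StopIteration:
        break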

@wielandbrendel (Member) commented

Where do we stand with this? The problem here seems to be that this PR tries to do too much. @cjsg could you split this PR into two parts, one for PyTorch 0.4.0 and the other for the binary search?

@jonasrauber (Member) commented

@wielandbrendel I've added PyTorch 0.4.0 support separately from this pull request (it required some more changes than the ones here); it's already released.

@cjsg you are welcome to do what Wieland proposed, but I have some local re-implementations of parts of the ideas in this pull request, as well as some other changes (e.g. batching) affecting these attacks, so I am not sure it's worth spending a lot of time on this right now ;-)
