
Allow a None batch dimension in SPSA for compatibility with generate_np() #509

Merged
merged 3 commits into from Sep 21, 2018

Conversation

juesato
Contributor

@juesato commented Aug 25, 2018

Also add a test that the generate_np() interface works properly.

Addresses #502

@catherio
Contributor

cc @nottombrown

Contributor

@nottombrown left a comment


LGTM after the small changes that I mentioned.

I'm a bit confused about why we have the "batch size must be 1" constraint here in the first place. If it's for performance reasons, perhaps we could iterate over the samples with a tf.while_loop?

It doesn't seem super important though, and these changes are definitely improvements, so they should be merged in.

@@ -1760,7 +1760,13 @@ def _get_delta(self, x, delta):
def _compute_gradients(self, loss_fn, x, unused_optim_state):
"""Compute gradient estimates using SPSA."""
# Assumes `x` is a list, containing a [1, H, W, C] image
Contributor

I think we're [N, H, W, C] now?

Contributor Author

The TF implementation still only supports batch size 1. It's implemented this way just for simplicity; when I used it, I didn't need more than batch size 1. There wouldn't really be any gain in efficiency from batching, since SPSA is internally batching the function evaluations of f(x + delta) for all the different delta values at each iteration.
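[Editor's note: the internal batching described above, where each SPSA iteration evaluates the loss at x ± delta·v for many random directions v at once, can be sketched in plain NumPy. This is a hypothetical illustration of the technique, not the CleverHans implementation; the function name and parameters are invented for the example.]

```python
import numpy as np

def spsa_gradient_estimate(loss_fn, x, delta=0.01, num_samples=128, seed=0):
    """Estimate the gradient of loss_fn at x via SPSA.

    Draws `num_samples` random +/-1 direction vectors v and evaluates
    the loss at x + delta*v and x - delta*v for all of them in one
    "batch" of function evaluations, as described in the comment above.
    """
    rng = np.random.default_rng(seed)
    # Rademacher (+/-1) perturbation directions, one per sample.
    v = rng.choice([-1.0, 1.0], size=(num_samples,) + x.shape)
    plus = np.array([loss_fn(x + delta * vi) for vi in v])
    minus = np.array([loss_fn(x - delta * vi) for vi in v])
    # Two-sided finite-difference scale for each sampled direction.
    scale = (plus - minus) / (2.0 * delta)
    grads = scale.reshape((num_samples,) + (1,) * x.ndim) * v
    return grads.mean(axis=0)
```

For a quadratic loss such as sum(x²), this estimator is unbiased and its mean converges to the true gradient 2x as the number of samples grows.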

The TF implementation could of course support a tf.while_loop on the outside to loop over the batch dimension, but that seems overly complex to me, since the computation is really handling each image separately, and it's easy to handle outside TF. I think it makes sense for the numpy interface to support batches, so I changed that.

Contributor

I agree with that logic. lgtm!


def generate_np(self, x_val, **kwargs):
# Add shape check for batch size=1, then call parent class generate_np
assert x_val.shape[0] == 1, 'x_val should be a batch of a single image'
Contributor

Can we allow arbitrary batch sizes and then iterate through them in the function here? It seems like the "x_val must be a batch of a single image" constraint will probably be surprising to people.

Contributor Author

Done.

@juesato
Contributor Author

juesato commented Aug 31, 2018

Added support for batch calls to generate_np().

The pure TF interface still doesn't support batched calls (see the comment above).
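[Editor's note: the batched generate_np() behavior described here, looping over the batch dimension outside TF and calling the single-image attack per example, can be sketched as follows. The names are hypothetical, not the actual CleverHans code.]

```python
import numpy as np

def generate_np_batched(attack_single, x_val):
    """Apply a single-image attack to each element of a batch.

    `attack_single` is a hypothetical callable taking a [1, H, W, C]
    array and returning an adversarial array of the same shape; the
    batch dimension is handled here, outside of TF, since SPSA
    processes each image independently.
    """
    adv = [attack_single(x_val[i:i + 1]) for i in range(x_val.shape[0])]
    return np.concatenate(adv, axis=0)
```

Each slice x_val[i:i + 1] keeps the leading batch dimension of size 1, so the single-image attack sees exactly the shape it expects.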

@resya173

Hi

@npapernot
Member

The pep8 checker is failing:

======================================================================
FAIL: Test if pep8 is respected.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/miniconda/envs/test-environment/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/travis/build/tensorflow/cleverhans/cleverhans/devtools/tests/test_format.py", line 45, in test_format_pep8
    raise AssertionError("PEP8 Format not respected")
AssertionError: PEP8 Format not respected
-------------------- >> begin captured stdout << ---------------------
/home/travis/build/tensorflow/cleverhans/cleverhans/attacks.py:1830:80: E501 line too long (80 > 79 characters)
--------------------- >> end captured stdout << ----------------------

@nottombrown
Contributor

Lgtm. Thanks @juesato !

@nottombrown nottombrown merged commit 00dee18 into cleverhans-lab:master Sep 21, 2018