DownsampleFactorMax support strides: issue #2196 #2222
Conversation
                    continue
            for j in xrange(ds1):
                col_ind = col_st + j
                if col_ind >= img_cols:
You can remove the `if` by using `for j in xrange(...)` instead, where `...` is the max number of iterations; the upper bound itself is excluded.
I get this error:
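The reviewer's suggestion can be sketched in plain Python (using Python 3's `range` in place of `xrange`; the helper name and the exact bound expression are my assumptions, since the bound is elided in the comment):

```python
def window_max(row, col_st, ds1, img_cols):
    # Hypothetical helper: max over one pooling window along a row.
    # Instead of looping range(ds1) and skipping out-of-bounds columns
    # with an `if`, clamp the iteration count up front. The bound
    # min(ds1, img_cols - col_st) is an assumed reconstruction of the
    # elided expression; the upper bound itself is excluded.
    best = None
    for j in range(min(ds1, img_cols - col_st)):
        col_ind = col_st + j  # always < img_cols, so no bounds check
        if best is None or row[col_ind] > best:
            best = row[col_ind]
    return best
```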
     def __str__(self):
         return '%s{%s,%s}' % (self.__class__.__name__,
-                              self.ds, self.ignore_border)
+                              self.ds, self.st, self.ignore_border)
You forgot to modify the string on line 147. This causes a crash.
In fact, I would take this opportunity to make this op use the new `__props__ = (...)` attribute.
Remove `__hash__`, `__eq__` and `__str__` from that op, and add the attribute `__props__ = ('ds', 'st', 'ignore_border')` to that class. This will cause the 3 methods to be generated automatically.
I wasn't able to reproduce the error about c_code you wrote me about in an email, so I guess it was the `__str__` problem. If not, send me the error message. Thanks.
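A minimal self-contained sketch of the idea (this is not Theano's actual Op machinery; the hand-written methods below emulate what declaring `__props__` generates automatically):

```python
class DownsampleFactorMaxSketch(object):
    # Declare __props__ and derive __eq__, __hash__ and __str__ from it,
    # as the reviewer suggests. In real Theano the Op base class
    # generates these three methods; here they are written out to show
    # the equivalent behaviour.
    __props__ = ('ds', 'st', 'ignore_border')

    def __init__(self, ds, st=None, ignore_border=False):
        self.ds = tuple(ds)
        self.st = self.ds if st is None else tuple(st)
        self.ignore_border = ignore_border

    def _prop_values(self):
        return tuple(getattr(self, p) for p in self.__props__)

    def __eq__(self, other):
        return (type(self) == type(other)
                and self._prop_values() == other._prop_values())

    def __hash__(self):
        return hash((type(self),) + self._prop_values())

    def __str__(self):
        return '%s{%s}' % (self.__class__.__name__,
                           ', '.join(map(str, self._prop_values())))
```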
:type ds: list or tuple of two ints
:param st: the stride size
Add that this is the distance between pooling regions. Also add that it can be None; in that case the value provided as `ds` will be used, which corresponds to the case of adjacent pooling regions.
OK, I did not see your comment about my formula before I reviewed; we should discuss what the right answer should be before trying to implement it.
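The requested semantics for `st` can be summed up in a tiny helper (the name `resolve_stride` is mine for illustration, not from the patch):

```python
def resolve_stride(ds, st=None):
    # `st` is the distance between consecutive pooling regions.
    # When st is None, fall back to ds, which gives adjacent
    # (non-overlapping) pooling regions.
    return ds if st is None else st
```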
nr = 0
nc = 0
if isinstance(r, theano.Variable):
    nr = tensor.switch(tensor.ge(r - ds[0], 0), out_r, 0)
This can be simplified to `nr = tensor.max(out_r, 0)` if `r` is a Variable, and `nr = numpy.max(out_r, 0)` otherwise. Idem for `nc`. And you don't have to assign them 0 beforehand.
Otherwise, it looks good, thanks for completing that part.
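The intent of the simplification, shown with plain numbers (a sketch with assumed names; the real code operates on Theano variables via `tensor.switch`/`tensor.max`):

```python
def pooled_out_size(n, ds, st):
    # Number of pooling windows along one dimension. The naive formula
    # can go negative when n < ds; clamping with max(..., 0) replaces
    # the switch/ge construct and makes the nr = 0 / nc = 0
    # pre-assignments unnecessary.
    out = (n - ds) // st + 1
    return max(out, 0)
```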
You will need to rebase and force-push so that Travis can run.
@@ -183,10 +232,13 @@ def grad(self, inp, grads):
         gz, = grads
         maxout = self(x)
         return [DownsampleFactorMaxGrad(self.ds,
-                                        ignore_border=self.ignore_border)(
+                                        ignore_border=self.ignore_border,
+                                        st=self.st)(
             x, maxout, gz)]
Before it is tested, can you return `theano.gradient.grad_not_implemented` when `self.st != self.ds`? That way, we are sure not to give wrong results.
-    def c_code(self, node, name, inp, out, sub):
+    def c_code_tmp(self, node, name, inp, out, sub):
Same as the forward op: re-enable the C code when `st == ds`.
This is for ticket #2196.
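One way to gate the C path, sketched without Theano (the class name is hypothetical, and raising `NotImplementedError` stands in for however Theano falls back to the Python implementation; those details are assumptions):

```python
class PoolGradCCodeSketch(object):
    def __init__(self, ds, st):
        self.ds = tuple(ds)
        self.st = tuple(st)

    def c_code(self):
        # Only the st == ds case is covered by the existing C kernel,
        # so refuse to emit C code otherwise and let the caller fall
        # back to the slower but general Python implementation.
        if self.st != self.ds:
            raise NotImplementedError('C code only valid when st == ds')
        return '/* existing C implementation */'
```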
Force-pushed from 95e1afe to a925585 (commit message: "… is greater than the pooling size").
@@ -322,10 +386,14 @@ def infer_shape(self, node, in_shapes):
     def grad(self, inp, grads):
         x, maxout, gz = inp
         ggx, = grads
+        if self.st != self.ds:
+            return [theano.gradient.grad_not_implemented(self, 0, x),
+                    theano.gradient.grad_not_implemented(self, 1, maxout)
There is a missing comma at the end of the line; this is a syntax error.
Thanks for completing that part!
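For illustration, the shape of the corrected return (with a stand-in for `theano.gradient.grad_not_implemented`, and a third element assumed to continue the truncated diff):

```python
def grad_not_implemented(op, idx, var):
    # Stand-in for theano.gradient.grad_not_implemented, which in real
    # Theano returns a special "gradient undefined" variable.
    return ('grad_not_implemented', idx, var)

def grad_sketch(st, ds, x, maxout, gz, ggx):
    # Sketch of the guarded gradient. Note the comma after the second
    # element: without it, Python parses the next bracketed line as a
    # continuation of this expression, which is the syntax error the
    # reviewer points out.
    if st != ds:
        return [grad_not_implemented(None, 0, x),
                grad_not_implemented(None, 1, maxout),  # <- trailing comma
                grad_not_implemented(None, 2, gz)]
    return [ggx, None, None]
```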
:param ds: factor by which to downscale (vertical ds, horizontal ds).
    (2,2) will halve the image in each dimension.
The method max_pool_2d needs an `st` parameter. I'll fix it.
Also, the GPU optimization that moves this op to the GPU does not check the new parameter! This introduces a bug. I'll fix it too.
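The GPU-transfer fix being described might look like this sketch (the function name and the supported-case condition are assumptions; the real optimizer substitutes a GPU op for the host op):

```python
def can_use_gpu_pool(op):
    # Before replacing the host op with its GPU counterpart, the
    # optimizer must check every attribute that affects the result.
    # If the GPU kernel predates the new `st` parameter, only the
    # adjacent-regions case (st == ds) is safe to transfer.
    return getattr(op, 'st', op.ds) == op.ds
```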