Assert output size of max_pooling_2d in GPU cases #2589

Merged
merged 4 commits into chainer:master on Apr 19, 2017

Conversation

@soramichi
Contributor

soramichi commented Apr 18, 2017

In the CPU code of max_pooling_2d, the size of the expected output is checked before the computation (in chainer/utils/conv.py), so the output can never become a 0-dimensional tensor.

However, this check is missing in the GPU code (both with and without cuDNN), so passing an invalid input to max_pooling_2d.py yields a very unfriendly error, as shown below.

This PR adds assertions that check the output size of max_pooling_2d in the GPU cases, and also extends the tests to cover the added behavior.
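The actual patch is not quoted in this thread, but the kind of check it adds can be sketched as follows. The `checked_outsize` helper below is a hypothetical standalone illustration; the output-size formula mirrors the one used by chainer.utils.conv.get_conv_outsize (for the `cover_all=False` case):

```python
def checked_outsize(size, k, s, p):
    # Hypothetical sketch of the assertion this PR adds to the GPU
    # paths; the CPU path already performs an equivalent check in
    # chainer/utils/conv.py.
    out = (size + p * 2 - k) // s + 1  # standard pooling output size
    assert out > 0, 'Output size should be positive.'
    return out

print(checked_outsize(4, 3, 2, 0))  # 1: a valid output width
# checked_outsize(1, 3, 2, 0) would raise a clear AssertionError
# instead of a cryptic CUDNN_STATUS_BAD_PARAM.
```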

Sample output before this PR is applied:

>>> import chainer
>>> x_data = chainer.cuda.cupy.random.rand(4, 4, 1, 4)
>>> x = chainer.Variable(x_data)
>>> chainer.functions.max_pooling_2d(x, 3, stride=2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/soramichi/src/anaconda3/lib/python3.6/site-packages/chainer/functions/pooling/max_pooling_2d.py", line 175, in max_pooling_2d
    return MaxPooling2D(ksize, stride, pad, cover_all, use_cudnn)(x)
  File "/home/soramichi/src/anaconda3/lib/python3.6/site-packages/chainer/function.py", line 199, in __call__
    outputs = self.forward(in_data)
  File "/home/soramichi/src/anaconda3/lib/python3.6/site-packages/chainer/function.py", line 310, in forward
    return self.forward_gpu(inputs)
  File "/home/soramichi/src/anaconda3/lib/python3.6/site-packages/chainer/functions/pooling/max_pooling_2d.py", line 32, in forward_gpu
    return super(MaxPooling2D, self).forward_gpu(x)
  File "/home/soramichi/src/anaconda3/lib/python3.6/site-packages/chainer/functions/pooling/pooling_2d.py", line 65, in forward_gpu
    y_desc = cudnn.create_tensor_descriptor(y)
  File "/home/soramichi/src/anaconda3/lib/python3.6/site-packages/cupy/cudnn.py", line 78, in create_tensor_descriptor
    cudnn.setTensor4dDescriptor(desc.value, format, data_type, *arr.shape)
  File "cupy/cuda/cudnn.pyx", line 444, in cupy.cuda.cudnn.setTensor4dDescriptor (cupy/cuda/cudnn.cpp:2967)
  File "cupy/cuda/cudnn.pyx", line 449, in cupy.cuda.cudnn.setTensor4dDescriptor (cupy/cuda/cudnn.cpp:2833)
  File "cupy/cuda/cudnn.pyx", line 392, in cupy.cuda.cudnn.check_status (cupy/cuda/cudnn.cpp:2017)
cupy.cuda.cudnn.CuDNNError: CUDNN_STATUS_BAD_PARAM: b'CUDNN_STATUS_BAD_PARAM'
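Plugging the shapes from this traceback into the standard pooling output-size formula shows why cuDNN rejects the tensor descriptor: the output height collapses to zero. A minimal sketch (this standalone `outsize` function is a re-implementation for illustration; the formula matches chainer.utils.conv.get_conv_outsize with cover_all=False):

```python
def outsize(size, k, s, p):
    # Standard pooling output-size formula (cover_all=False case).
    return (size + p * 2 - k) // s + 1

# Shapes from the traceback above: x has shape (4, 4, 1, 4),
# with ksize=3, stride=2, pad=0.
y_h = outsize(1, 3, 2, 0)  # height 1 -> (1 - 3) // 2 + 1 == 0
y_w = outsize(4, 3, 2, 0)  # width 4  -> (4 - 3) // 2 + 1 == 1
print(y_h, y_w)  # 0 1 -- a zero-height tensor, which cuDNN rejects
```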

@okuta okuta self-assigned this Apr 18, 2017

@okuta

LGTM except comments!

@@ -75,6 +87,34 @@ def test_forward_gpu(self):
def test_forward_gpu_no_cudnn(self):
self.check_forward(cuda.to_gpu(self.x), False)
@attr.gpu
@condition.retry(3)

@okuta

okuta Apr 18, 2017

Member

This statement is redundant.
Please remove this.

functions.max_pooling_2d(x, 3, stride=2, use_cudnn=False)
@attr.cudnn
@condition.retry(3)

@okuta

okuta Apr 18, 2017

Member

This statement is redundant.
Please remove this.

@soramichi

Contributor

soramichi commented Apr 18, 2017

Thank you for the review.

I removed the decorators as suggested, but I don't really know why they are not required. Is it because the test cases do not use to_gpu?

@soramichi

Contributor

soramichi commented Apr 18, 2017

Oh sorry, I guess you meant @condition.retry(3) is redundant because the added tests are deterministic, but your comment was not about @gpu and @cudnn.

I re-introduced @gpu and @cudnn, and now only @condition.retry(3) is removed.
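The reviewer's point can be illustrated with a simplified stand-in for chainer.testing.condition.retry (this `retry` is a hypothetical re-implementation, not the actual library code): retrying only helps tests that can fail nondeterministically, so it adds nothing to the deterministic assertion tests added here.

```python
import functools

def retry(times):
    """Simplified stand-in for chainer.testing.condition.retry:
    re-runs a test up to `times` times, passing if any run passes."""
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            last = None
            for _ in range(times):
                try:
                    return f(*args, **kwargs)
                except AssertionError as e:
                    last = e
            raise last
        return wrapper
    return deco

# A deterministic test (like the ones added in this PR) either always
# passes or always fails, so @retry(3) changes nothing -- hence the
# suggestion to drop @condition.retry(3) while keeping @attr.gpu and
# @attr.cudnn, which only mark the required hardware.
@retry(3)
def test_deterministic():
    assert 1 + 1 == 2

test_deterministic()  # succeeds on the first run
```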

@okuta okuta added this to the v1.24.0 milestone Apr 19, 2017

@okuta

Member

okuta commented Apr 19, 2017

LGTM!

@okuta okuta merged commit f35678a into chainer:master Apr 19, 2017

4 checks passed

continuous-integration/appveyor/branch: AppVeyor build succeeded
continuous-integration/appveyor/pr: AppVeyor build succeeded
continuous-integration/travis-ci/pr: The Travis CI build passed
continuous-integration/travis-ci/push: The Travis CI build passed

@soramichi soramichi deleted the soramichi:check_maxpool_outsize branch Apr 19, 2017
