
Remove use_cudnn argument from SpatialTransformerGrid/Sampler #2955

Merged
merged 6 commits into chainer:master on Sep 11, 2017

Conversation

5 participants
@mitmul
Member

mitmul commented Jul 3, 2017

SpatialTransformerGrid and SpatialTransformerSampler use a use_cudnn flag to switch between the cuDNN and non-cuDNN implementations, so they ignore the global/local configuration. That use_cudnn flag should be replaced with chainer.should_use_cudnn('>=auto').
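For context, here is a minimal, self-contained sketch of the dispatch pattern the issue asks for: functions consult a global/local configuration (as chainer.should_use_cudnn and chainer.using_config do) rather than a per-call use_cudnn flag. The functions below are simplified stand-ins, not Chainer's actual code.

```python
import contextlib
import threading

# Simplified stand-ins for chainer.config, chainer.using_config and
# chainer.should_use_cudnn; only the dispatch logic is illustrated.
_config = threading.local()
_config.use_cudnn = 'auto'  # one of 'always', 'auto', 'never'

@contextlib.contextmanager
def using_config(name, value):
    # Temporarily override a configuration entry, restoring it on exit.
    old = getattr(_config, name)
    setattr(_config, name, value)
    try:
        yield
    finally:
        setattr(_config, name, old)

def should_use_cudnn(level):
    # '==always' requires use_cudnn == 'always'; '>=auto' also accepts 'auto'.
    allowed = {'==always': ('always',), '>=auto': ('always', 'auto')}
    return _config.use_cudnn in allowed[level]

def forward_gpu_path():
    # The function picks its implementation from the configuration,
    # not from a keyword argument passed at call time.
    return 'cudnn' if should_use_cudnn('>=auto') else 'plain'

print(forward_gpu_path())          # 'cudnn' under the default 'auto'
with using_config('use_cudnn', 'never'):
    print(forward_gpu_path())      # 'plain' inside the context
```

The point of the change is exactly this: callers control cuDNN use once, via configuration, instead of threading a flag through every function.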

@unnonouno unnonouno added the no-compat label Jul 3, 2017

@unnonouno

Member

unnonouno commented Jul 3, 2017

It breaks the interface.

@@ -34,7 +34,7 @@ def forward_cpu(self, inputs):
         return self._forward(inputs)

     def forward_gpu(self, inputs):
-        if not (cuda.cudnn_enabled and self.use_cudnn and
+        if not (cuda.cudnn_enabled and chainer.should_use_cudnn('>=auto') and

@yuyu2172

yuyu2172 Jul 3, 2017

Contributor

I think cuda.cudnn_enabled is not necessary because chainer.should_use_cudnn always returns False when cuDNN is not installed.
https://github.com/chainer/chainer/blob/master/chainer/__init__.py#L97

Also, the cuDNN version can be checked via the lowest_version argument of should_use_cudnn.
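A hedged sketch of the simplification being suggested: the version requirement folds into the same call, and the function itself returns False when cuDNN is missing, so the caller needs no separate cuda.cudnn_enabled guard. should_use_cudnn here is a simplified stand-in for the Chainer function, and the availability/version constants are assumptions for illustration.

```python
# Stand-in for chainer.should_use_cudnn: returns False when cuDNN is
# absent or older than lowest_version, so callers need no extra guard.
_CUDNN_AVAILABLE = True
_CUDNN_VERSION = 5000  # assumed installed version, for illustration only

def should_use_cudnn(level, lowest_version=0):
    if not _CUDNN_AVAILABLE or _CUDNN_VERSION < lowest_version:
        return False
    # Configuration check elided; assume the config allows cuDNN here.
    return level in ('==always', '>=auto')

# forward_gpu then needs only a single condition, e.g.:
# if should_use_cudnn('>=auto', 5000): ... use the cuDNN kernel ...
```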

@mitmul

mitmul Jul 4, 2017

Member

Thank you for your comment. I fixed the code.

x.cleargrad()
grid.cleargrad()
grid.claergrad()

@kmatsuura

kmatsuura Jul 26, 2017

Contributor

Typo?

@mitmul

mitmul Aug 6, 2017

Member

Thanks, I fixed it!

@mitmul mitmul added this to the v3.0.0rc1 milestone Aug 23, 2017

@beam2d
Member

beam2d left a comment

LGTM except for taking care of the compatibility break.

@@ -115,7 +112,7 @@ def _backward(self, inputs, grad_outputs):
         return gtheta,


-def spatial_transformer_grid(theta, output_shape, use_cudnn=True):
+def spatial_transformer_grid(theta, output_shape):

@beam2d

beam2d Sep 11, 2017

Member

Could you show an error message when this function is used with the use_cudnn flag? See, e.g., the implementation of convolution_2d.
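A sketch of one way to raise the requested error when a caller still passes use_cudnn (convolution_2d handles removed keyword arguments in a similar spirit). The exception type and message wording here are assumptions for illustration, not the merged code:

```python
def spatial_transformer_grid(theta, output_shape, **kwargs):
    # Reject the removed keyword with an explanatory message instead of
    # silently ignoring it; wording is illustrative, not the merged text.
    if 'use_cudnn' in kwargs:
        raise ValueError(
            'use_cudnn argument is not supported anymore. '
            "Use chainer.using_config('use_cudnn', value) instead, "
            "where value is 'always', 'auto', or 'never'.")
    # ... actual grid computation elided; return a placeholder ...
    return (theta, output_shape)
```

Raising an explicit error on the old signature gives users migrating from v2 a pointer to the configuration mechanism rather than a confusing TypeError.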

@@ -253,7 +248,7 @@ def _backward(self, inputs, grad_outputs):
         return gx, ggrid


-def spatial_transformer_sampler(x, grid, use_cudnn=True):
+def spatial_transformer_sampler(x, grid):

@beam2d

beam2d Sep 11, 2017

Member

ditto (error message on old usage)

@beam2d

Member

beam2d commented Sep 11, 2017

LGTM

@beam2d beam2d merged commit dc30664 into chainer:master Sep 11, 2017

2 of 3 checks passed

continuous-integration/travis-ci/pr: The Travis CI build failed
continuous-integration/appveyor/pr: AppVeyor build succeeded
coverage/coveralls: First build on remove-use-cudnn-arg at 88.493%