
Fix the meaning of conv{1,2}d_transpose output_padding parameter. #5758

Merged: 28 commits into apache:master on Jun 30, 2020

Conversation

@abergeron (Contributor) commented Jun 9, 2020

I finally had time to work on this again, so this is a re-attempt at #4318, which was reverted.


For some context: the current TVM meaning of output_padding is to pad the output with extra zeros, enlarging the output shape.

In every other deep learning framework, this parameter name is used to disambiguate between the possible output shapes that arise with strided convolutions (unless the framework has an explicit output_shape parameter, like TensorFlow, which has no output_padding at all).

The current behaviour is, in my mind, redundant, since you can always zero-pad the output manually if you need to. It also makes it impossible to compute the gradient of a strided convolution whose input did not have the minimum possible shape for its output.

The new behaviour makes this possible and brings us in line with the other frameworks.
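
To make the ambiguity concrete, here is a small self-contained sketch (plain Python with hypothetical helper names, not TVM code) of the standard shape formulas. With stride 2, two different input sizes convolve down to the same output size, and output_padding is what lets the transposed convolution pick which one to reconstruct:

```python
def conv_out_size(in_size, kernel, stride, pad):
    # Forward convolution output size; the floor division is what
    # discards information and creates the ambiguity.
    return (in_size + 2 * pad - kernel) // stride + 1

def conv_transpose_out_size(in_size, kernel, stride, pad, output_padding):
    # Transposed convolution under the new semantics: output_padding
    # recovers the size lost to the floor division above; it does NOT
    # zero-pad the result the way the old TVM behaviour did.
    return (in_size - 1) * stride - 2 * pad + kernel + output_padding

# With stride 2, inputs of size 7 and 8 both convolve down to size 3 ...
assert conv_out_size(7, kernel=3, stride=2, pad=0) == 3
assert conv_out_size(8, kernel=3, stride=2, pad=0) == 3

# ... so transposing a size-3 input is ambiguous, and output_padding
# selects which of the two original sizes to reconstruct. This is also
# why the gradient of a strided convolution needs it.
assert conv_transpose_out_size(3, 3, 2, 0, output_padding=0) == 7
assert conv_transpose_out_size(3, 3, 2, 0, output_padding=1) == 8
```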


I've rebased the code and added checks to make sure that an invalid output_padding is rejected.

As for the comments by @icemelon9:

  1. The aforementioned check should cover that case and prevent going out of bounds.
  2. Since the preprocess function handles everything that we need output_padding for, we don't need to pass it forward.

My best guess for why the CI didn't catch these cases is that going just slightly out of bounds (e.g. off by one) is unlikely to trigger any detectable memory problems.
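
For reference, the usual constraint (this is how other frameworks define it, and presumably what the added checks enforce; the helper below is a hypothetical standalone sketch, not the PR's actual code) is that each output_padding value must be strictly smaller than the corresponding stride:

```python
def check_output_padding(output_padding, strides):
    # Hypothetical sketch of the validity rule: output_padding only
    # resolves the ambiguity left over by strided downsampling, so each
    # value must satisfy 0 <= output_padding < stride.
    for opad, stride in zip(output_padding, strides):
        if not 0 <= opad < stride:
            raise ValueError(
                "output_padding %d out of range for stride %d" % (opad, stride)
            )

check_output_padding((1, 1), (2, 2))    # fine
# check_output_padding((2, 1), (2, 2))  # would raise ValueError
```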

@abergeron (Contributor, Author) commented:

The only thing still missing is probably an update of tophub to add the missing output_padding arguments.

@abergeron (Contributor, Author) commented:

I've confirmed manually that with the tophub changes the VTA tests pass.

@vinx13 (Member) commented Jun 14, 2020

The tophub PR has been merged; please restart CI to try again.

@vinx13 (Member) commented Jun 20, 2020

ping @abergeron

@abergeron (Contributor, Author) commented:

Sorry about the delay, I was distracted by another project.

@abergeron (Contributor, Author) commented:

I saw that @siju-samuel added the conv3d_transpose op, but with the wrong definition of output_padding.

I would rather make a follow-up PR to fix that, since I want this one merged if it's OK and I don't have a lot of free time at the moment. I will fix the definition of output_padding in conv3d_transpose in that follow-up PR, unless somebody else does it first.

@siju-samuel (Member) commented:

@abergeron Thanks for bringing this up, and for the PR. I followed the existing conv2d_transpose implementation. You can add another PR to fix this issue.

vinx13 merged commit bc22fb9 into apache:master on Jun 30, 2020
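
To see the merged semantics from the Relay side (the commit list below includes "Make use of the new output_padding argument in Relay"), here is a hedged usage sketch. relay.nn.conv2d_transpose does take an output_padding argument; the shapes are illustrative, not taken from the PR:

```python
from tvm import relay

# Illustrative shapes: a 3x3 spatial input with stride 2 and kernel 3
# could have come from either a 7x7 or an 8x8 forward-conv input;
# output_padding=(1, 1) asks for the 8x8 reconstruction.
data = relay.var("data", shape=(1, 16, 3, 3))
weight = relay.var("weight", shape=(16, 8, 3, 3))
out = relay.nn.conv2d_transpose(
    data,
    weight,
    channels=8,
    kernel_size=(3, 3),
    strides=(2, 2),
    padding=(0, 0),
    output_padding=(1, 1),
)
```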
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Jun 30, 2020
… (apache#5758)

* Add output_padding to generic
* Add output_padding to the reference impl
* Add output_padding to arm_cpu
* Add output_padding to the test
* Add output_padding for cuda
* Add output_padding for x86
* Make use of the new output_padding argument in Relay
* Adjust conv2d_transpose Relay test
* Fix lint errors
* Fix the VTA declaration of conv2d_transpose
* support for output padding in conv2d transpose
* some output padding will break IR pass
* Fix new conv2d_transpose test
* Update tophub
* Fix conv1d output_padding too.
* Fix the conv1d_transpose reference function.
* Fix the cuda impl
* fix the topi test for conv1d
* format
* Add tests for conv1d_transpose output_padding and some check that the values are valid.
* Add check in the implementations
* Add checks to the implementations of conv2d
* Make use of the output_padding argument from topi in relay.
* Fix relay tests asking for invalid output_padding
* Fix line length
* Fix vta tests
* Update tophub references
* Trigger CI

Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
zhiics pushed a commit to neo-ai/tvm that referenced this pull request Jul 2, 2020
… (apache#5758)