
[Relay][Frontend][Tensorflow]Add conv2d_transpose #4300

Merged
merged 3 commits into apache:master on Nov 18, 2019

Conversation

optima2005
Contributor

Hi, this PR fixes #4263.

After investigation, I found that this operation is already implemented in the Relay C++ code and is working in the Keras frontend. For the TensorFlow frontend, only the Python part is missing, so this PR fixes just the Python part.

The conv2d_transpose implementation is similar to the conv2d op, apart from the following differences:

  1. Its name in the tensorflow graph definition is 'Conv2DBackpropInput'
  2. Its input order is: 0 -- "conv2d_transpose/input_sizes", 1 -- weight, 2 -- input_placeholder
  3. It needs an extra input parameter "output_shape"

The PR reuses the _conv() function and adds conditions to handle the differences above, as sketched below.
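
A minimal sketch, assuming a standalone converter function (the actual PR reuses the existing _conv() helper), of how the three differences above can be handled when mapping Conv2DBackpropInput onto Relay; the strides/padding values are placeholders, not the real attribute handling:

```python
from tvm import relay

def convert_conv2d_transpose(inputs):
    # Conv2DBackpropInput lists its inputs in a different order than Conv2D:
    #   inputs[0] -> "conv2d_transpose/input_sizes" (the requested output shape)
    #   inputs[1] -> weights
    #   inputs[2] -> the tensor that is actually deconvolved
    _output_sizes, weights, data = inputs

    # relay's conv2d_transpose expects NCHW data and OIHW weights, so in
    # practice the TF tensors also have to be transposed into these layouts.
    return relay.nn.conv2d_transpose(
        data, weights,
        strides=(1, 1),    # placeholder; taken from the TF node attributes
        padding=(0, 0),    # placeholder; "SAME"/"VALID" resolved to pad amounts
        data_layout="NCHW",
        kernel_layout="OIHW",
    )
```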

@yzhliu
Member

yzhliu commented Nov 11, 2019

@yongwww @srkreddy1238 could you help to review?

@@ -349,6 +361,14 @@ def test_forward_convolution():
_test_convolution('depthwise', [4, 124, 17, 17], [1, 1, 124, 1], [1, 1], [1, 1], 'SAME', 'NCHW')
_test_convolution('depthwise', [4, 12, 17, 17], [3, 3, 12, 1], [1, 1], [2, 2], 'VALID', 'NCHW')
_test_convolution('depthwise', [4, 12, 17, 17], [3, 3, 12, 2], [1, 1], [2, 2], 'VALID', 'NCHW')
_test_convolution('conv_transpose', [4, 32, 8, 8], [1, 1, 176, 32], [1, 1], [1, 1], 'SAME',
Member


please add test case for cpu

Contributor Author


NHWC layout is not supported, see nn.conv2d_transpose. Is it still necessary to add test cases for CPU?

Member


yeah, test cases for cpu are needed. How about adding test cases for NCHW input, and adding a layout transformation to get the expected NCHW layout for NHWC input?

Contributor Author


Do you mean adding the transformation from NHWC to NCHW for the 'conv2d_transpose' op at the beginning of _conv?

Member


yeah, if the layout is NCHW, nn.conv2d_transpose should work well. If the layout is NHWC, we can consider transforming it to NCHW to be compatible with nn.conv2d_transpose, as in the sketch below.
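
A rough sketch of that layout handling, with an assumed helper name (not the actual _conv() code): NCHW data goes straight to nn.conv2d_transpose, while NHWC data is transposed to NCHW before the op and back to NHWC afterwards.

```python
from tvm import relay

def conv2d_transpose_with_layout(data, weights, layout, **conv_attrs):
    # Hypothetical wrapper; conv_attrs stands in for strides, padding, etc.
    if layout == "NHWC":
        data = relay.transpose(data, axes=(0, 3, 1, 2))   # NHWC -> NCHW
    out = relay.nn.conv2d_transpose(data, weights,
                                    data_layout="NCHW",
                                    kernel_layout="OIHW",
                                    **conv_attrs)
    if layout == "NHWC":
        out = relay.transpose(out, axes=(0, 2, 3, 1))     # NCHW -> NHWC
    return out
```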

Contributor Author


I have added the transformation and the cpu test cases, could you please review again? Thanks!

@optima2005
Contributor Author

The check failure is not due to the changes in this PR, and I can run the failed tests successfully in my local environment. So I'd like to restart the check, but I don't know how to do that. Can anyone advise? Thanks!

@yongwww
Member

yongwww commented Nov 14, 2019

The check failure is not due to the changes in this PR, and I can run the failed tests successfully in my local environment. So I'd like to restart the check, but I don't know how to do that. Can anyone advise? Thanks!

Your test case failed due to "TypeError: conv2d_transpose() got an unexpected keyword argument 'dilations'"; try using a different tf version (1.2, 1.3, or 1.4) to reproduce it.

@optima2005
Contributor Author

tf1.4 doesn't have this error while tf1.3 does.
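
For illustration only (the helper name here is made up): the fix that eventually landed drops the `dilations` argument so the test helper also works on TF 1.3, which rejects that keyword.

```python
import tensorflow as tf

def conv_transpose_compat(value, filters, output_shape, strides, padding):
    # No `dilations=` keyword here: passing it on TF 1.3 raises
    # "TypeError: conv2d_transpose() got an unexpected keyword argument 'dilations'".
    return tf.nn.conv2d_transpose(value, filters, output_shape,
                                  strides=strides, padding=padding)
```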

Member

@yongwww yongwww left a comment


LGTM. Thanks!

@optima2005
Contributor Author

@srkreddy1238 @yzhliu any further comments? Thanks!

@yzhliu yzhliu merged commit 2baf310 into apache:master Nov 18, 2019
@yzhliu
Member

yzhliu commented Nov 18, 2019

Thanks @optima2005 @yongwww

zxy844288792 pushed a commit to zxy844288792/tvm that referenced this pull request Nov 26, 2019
* [Relay][Frontend][Tensorflow]Add conv2d_transpose

* add transformation from NHWC to NCHW to be compatible with the TVM conv2d_transpose implementation

* remove 'dilations' parameter to be compatible with TF1.3
yongwww pushed a commit to neo-ai/tvm that referenced this pull request Nov 26, 2019
@apivovarov
Contributor

apivovarov commented Nov 27, 2019

Why do we only support kernel (1,1) for TF conv2d_transpose with padding SAME?
https://discuss.tvm.ai/t/why-we-only-support-kernel-1-1-for-tf-conv2d-transpose-same/4957

@yongwww
Member

yongwww commented Nov 27, 2019

@optima2005

@optima2005
Contributor Author

@yongwww I am wondering whether there might be an issue in the padding for 'SAME'. This PR does not seem to match the padding in topi.nn.conv2d_transpose. I need a little more time to investigate this and will follow up with an additional PR if anything needs to be fixed.

@apivovarov
Contributor

apivovarov commented Dec 2, 2019

In any case, please add a _test_convolution case for a 3x3 kernel and SAME padding.
It looks like you changed the kernel to 1x1 in order for the SAME padding tests to pass.

For now I'd open a PR that adds a kernel size check and shows users a message saying "Currently we support SAME padding for kernel with size 1x1 only", as sketched below.
@yongwww @optima2005
Example: https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/tflite.py#L1440
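
A hypothetical guard along those lines, mirroring the style of the linked tflite.py check; the function name and message wording are illustrative, and the real frontends typically raise one of TVM's tvm.error classes rather than NotImplementedError.

```python
def _check_conv2d_transpose_padding(padding, kernel_h, kernel_w):
    # Reject the combination that is not handled correctly yet.
    if padding == "SAME" and (kernel_h, kernel_w) != (1, 1):
        raise NotImplementedError(
            "Currently we support SAME padding for kernel with size 1x1 only")
```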

@optima2005 optima2005 deleted the add_conv2d_transpose_#4263 branch December 4, 2019 07:00

Successfully merging this pull request may close these issues.

[FRONTEND][Tensorflow] conv2d_transpose op of Tensorflow