
[NNVM][CONVOLUTION] Group convolution generalization for NHWC #1232

Merged 4 commits into apache:master on Jun 24, 2018

Conversation

@srkreddy1238 (Contributor)

No description provided.

@srkreddy1238 (Contributor Author)

@Huyuwei please review.
This change is needed because the weight shape is expected differently for NHWC and NCHW. I have checked the compute definitions of both depthwise implementations and arrived at this.
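
To make the difference concrete, here is a rough sketch of the kernel shapes involved, based on the TOPI docstrings for depthwise conv2d; the variable names below are purely illustrative and are not code from this PR.

    # Illustrative only (not PR code). Shapes assumed from the TOPI docstrings:
    # depthwise_conv2d_nchw expects (in_channel, multiplier, kh, kw),
    # depthwise_conv2d_nhwc expects (kh, kw, in_channel, multiplier),
    # while a regular conv2d_nchw kernel is (out_channel, in_channel, kh, kw).
    in_channel, multiplier, kh, kw = 32, 1, 3, 3

    w_depthwise_nchw = (in_channel, multiplier, kh, kw)   # (32, 1, 3, 3)
    w_depthwise_nhwc = (kh, kw, in_channel, multiplier)   # (3, 3, 32, 1)

    # So multiplying wshape[0] by `groups`, which is what the NCHW path does,
    # would scale the kernel-height axis in the NHWC case.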

@srkreddy1238 (Contributor Author)

If we are good with this, I will add a test case for NHWC-depthwise.

    if (param.layout == "NHWC") {
      wshape[kernel_layout.indexof('I')] *= param.groups;
    } else {
      wshape[0] *= param.groups;
Contributor:

When param.layout == "NCHW", is kernel_layout.indexof('I') = 0?


Contributor:

I think wshape[kernel_layout.indexof('O')] *= param.groups can handle both cases. You can try some tests.

@srkreddy1238 (Contributor Author), Jun 12, 2018:

No, it doesn't work that way, because the kernel layout for NCHW deviates from the declared format (it is actually CNHW as passed from the frontend, and the compute expects the same).


@@ -79,7 +79,15 @@ inline bool Conv2DInferShape(const nnvm::NodeAttrs& attrs,
                             param.kernel_size[1]});

      wshape = ConvertLayout(wshape, kOIHW, kernel_layout);
      wshape[0] *= param.groups;

      // Depthwise
Contributor:

It's group convolution, and depthwise is just a special case.

In fact, only depthwise convolution with multiplier=1 is supported now, since it can be expressed as group convolution; see https://github.com/dmlc/tvm/blob/master/nnvm/python/nnvm/top/nn.py#L100-L101
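
As a side note on the multiplier=1 point, here is a minimal NumPy sketch (not TVM/TOPI code; the helper name and shapes are made up for illustration) showing that depthwise convolution with multiplier=1 is exactly a group convolution with groups equal to the number of input channels:

    import numpy as np

    def group_conv2d_nchw(x, w, groups):
        # Naive group convolution, stride 1, no padding.
        # x: (N, C, H, W), w: (O, C // groups, KH, KW)
        n, c, h, wd = x.shape
        o, cg, kh, kw = w.shape
        assert c % groups == 0 and o % groups == 0 and cg == c // groups
        og = o // groups
        oh, ow = h - kh + 1, wd - kw + 1
        out = np.zeros((n, o, oh, ow))
        for g in range(groups):
            xs = x[:, g * cg:(g + 1) * cg]
            ws = w[g * og:(g + 1) * og]
            for i in range(oh):
                for j in range(ow):
                    patch = xs[:, :, i:i + kh, j:j + kw]      # (N, cg, KH, KW)
                    out[:, g * og:(g + 1) * og, i, j] = np.tensordot(
                        patch, ws, axes=([1, 2, 3], [1, 2, 3]))
        return out

    # Depthwise with multiplier = 1: one (1, KH, KW) filter per input channel,
    # i.e. kernel shape (C, 1, KH, KW) and groups == C.
    x = np.random.randn(1, 3, 8, 8)
    w = np.random.randn(3, 1, 3, 3)
    y = group_conv2d_nchw(x, w, groups=3)

    # Reference: output channel c only ever sees input channel c.
    ref = np.zeros_like(y)
    for ch in range(3):
        for i in range(6):
            for j in range(6):
                ref[0, ch, i, j] = np.sum(x[0, ch, i:i + 3, j:j + 3] * w[ch, 0])
    assert np.allclose(y, ref)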

@srkreddy1238 (Contributor Author)

This generalization works only if we pass kernel_layout as CNHW (IOHW). IOHW is an agreement between the frontend and the NCHW depthwise compute that infer-shape does not know about.
In my case the multiplier is also 1.

@srkreddy1238 (Contributor Author)

Alternatively, we could change the frontend to pass kernel_layout as IOHW and generalize it to indexof('I').

@Huyuwei (Contributor) commented Jun 12, 2018

@srkreddy1238 Could you first add a test case that drives this change? It will help me better follow what's going on. You may need to modify these two files to test NHWC layout:
https://github.com/dmlc/tvm/blob/master/nnvm/python/nnvm/top/nn.py#L78
https://github.com/dmlc/tvm/blob/master/nnvm/tests/python/compiler/test_top_level2.py#L61

@srkreddy1238 (Contributor Author) commented Jun 13, 2018

@Huyuwei

We don't have a frontend with NHWC and depthwise right now to provide a clear test case. I can do this once we have the TensorFlow frontend.

Ref: https://github.com/srkreddy1238/tvm/tree/mobilenet
The head of this branch has the TensorFlow and MobileNet changes that demonstrate this change.

@srkreddy1238 (Contributor Author)

@Huyuwei

Test case added for NHWC, please check.

@Huyuwei (Contributor) commented Jun 15, 2018

My suggestion is that we use wshape[kernel_layout.indexof('O')] *= param.groups to handle all layouts.

NHWC depthwise convolution can be expressed as

    depthconv_nhwc = sym.conv2d(data, layout="NHWC", kernel_layout="HWOI")

while a normal grouped NHWC convolution can be expressed as

    groupconv_nhwc = sym.conv2d(data, layout="NHWC", kernel_layout="HWIO")

The order of in_channel and out_channel in the kernel layout of topi.conv2d_nchw and topi.conv2d_nhwc is different, but the order of in_channel and multiplier in the kernel layout of topi.depthwise_conv2d_nchw and topi.depthwise_conv2d_nhwc is the same. It's better to deal with this asymmetry in the frontend, not in the C++ code.

@srkreddy1238 what do you think?
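
For what it's worth, here is a plain-Python sketch of why the indexof('O') rule can cover all of these layouts. It is only an illustration, not the nnvm code (the real logic lives in convolution.cc and uses the Layout class), and it assumes that wshape is first built in OIHW order as (channels / groups, in_channels / groups, kh, kw) and then permuted into kernel_layout.

    def infer_weight_shape(kernel_layout, in_channels, channels, kh, kw, groups):
        # Build the shape in OIHW order, permute into kernel_layout,
        # then multiply the 'O' axis by groups -- the rule proposed above.
        dims = {"O": channels // groups, "I": in_channels // groups, "H": kh, "W": kw}
        wshape = [dims[axis] for axis in kernel_layout]
        wshape[kernel_layout.index("O")] *= groups
        return wshape

    # Depthwise (groups == in_channels == channels, multiplier 1):
    print(infer_weight_shape("OIHW", 32, 32, 3, 3, groups=32))  # [32, 1, 3, 3]
    print(infer_weight_shape("HWOI", 32, 32, 3, 3, groups=32))  # [3, 3, 32, 1]

    # General group conv, e.g. 64 input channels, 128 filters, 4 groups:
    print(infer_weight_shape("OIHW", 64, 128, 3, 3, groups=4))  # [128, 16, 3, 3]
    print(infer_weight_shape("HWIO", 64, 128, 3, 3, groups=4))  # [3, 3, 16, 128]

With kernel_layout="HWIO" for a depthwise kernel, the same rule would instead give [3, 3, 1, 32], which is exactly the mismatch with topi.depthwise_conv2d_nhwc being discussed here.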

@Huyuwei (Contributor) commented Jun 15, 2018

@srkreddy1238 May I ask one question, just out of curiosity?
Since convolution.cc is basically copied from the MXNet source code, is there also a bug in grouped NHWC convolution in MXNet?
https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/convolution.cc#L152

@srkreddy1238 (Contributor Author)

", but the order of in_channel and multiplier in the kernel layout of topi.depthwise_conv2d_nchw and topi.depthwise_conv2d_nhwc is the same"

No, they are not same.

Let me check mxnet compute implementation..

@Huyuwei (Contributor) commented Jun 16, 2018

@srkreddy1238 aha, thanks for the pointer.

In the kernel layouts of both topi.depthwise_conv2d_nchw and topi.depthwise_conv2d_nhwc, in_channel comes before multiplier, is that right?
https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/depthwise_conv2d.py#L21-L22
https://github.com/dmlc/tvm/blob/master/topi/python/topi/nn/depthwise_conv2d.py#L76-L77

@srkreddy1238 (Contributor Author)

Yes, channel is before multiplier.

@Huyuwei (Contributor) commented Jun 17, 2018

@srkreddy1238 For topi.conv2d_nchw, out_channel comes before in_channel in the kernel layout, while for topi.conv2d_nhwc, out_channel comes after in_channel. So there is a mismatch between conv2d_nhwc and depthwise_conv2d_nhwc.

wshape[kernel_layout.indexof('O')] *= param.groups works for general group convolutions. Since group conv is not yet implemented in TOPI, we currently dispatch to depthwise conv to support a special case of group conv. To solve the kernel layout mismatch between conv2d_nhwc and depthwise_conv2d_nhwc, we need to specify kernel_layout="HWOI" for depthwise conv, whereas for general group conv it is kernel_layout="HWIO", as previously commented.
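
The resulting convention could be summarized by a small hypothetical frontend-side helper (illustrative only, not the actual NNVM frontend code) that picks the NHWC kernel layout depending on whether the convolution is depthwise:

    def pick_nhwc_kernel_layout(groups, in_channels, channels):
        # Hypothetical illustration of the convention agreed above:
        # depthwise (groups == in_channels == channels) uses HWOI and is
        # dispatched to topi.nn.depthwise_conv2d_nhwc; everything else uses
        # HWIO, the layout of topi.nn.conv2d_nhwc.
        if groups == in_channels and groups == channels:
            return "HWOI"
        return "HWIO"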

@srkreddy1238 (Contributor Author)

@Huyuwei

Thanks for the info; this makes it clear to me.
I will work on modifications based on this.

@srkreddy1238 changed the title from "[NNVM][CONVOLUTION] Generic index for group multiply - bug fix" to "[NNVM][CONVOLUTION] Group comvolution generalization for NHWC" on Jun 18, 2018
@srkreddy1238 changed the title from "[NNVM][CONVOLUTION] Group comvolution generalization for NHWC" to "[NNVM][CONVOLUTION] Group convolution generalization for NHWC" on Jun 18, 2018
@srkreddy1238 (Contributor Author)

@Huyuwei
This now works for both NCHW and NHWC. Please review.

        return topi.generic.schedule_depthwise_conv2d_nchw(outs)
    elif groups == channels and layout == "NCHW":
        return topi.generic.schedule_depthwise_conv2d_nchw(outs)
    elif groups == channels and layout == "NHWC":
Contributor:

elif groups == channels and layout == "NHWC" and kernel_layout == "HWOI":

@srkreddy1238 (Contributor Author)

Done.

@@ -98,9 +98,14 @@ def compute_conv2d(attrs, inputs, _):
    if groups == 1:
        out = topi.nn.conv2d(inputs[0], kernel, strides, padding, layout)
    elif groups == get_const_int(inputs[0].shape[1]) and groups == channels:
Contributor:

add more checks here

elif layout == "NCHW" and groups == get_const_int(inputs[0].shape[1]) and groups == channels:
    out =
elif layout == "NHWC" and kernel_layout == "HWOI" and groups == get_const_int(inputs[0].shape[3]) and groups == channels:
    out =

@srkreddy1238 (Contributor Author)

Done

    elif groups == get_const_int(inputs[0].shape[1]) and groups == channels:
    elif layout == "NCHW" and \
            groups == get_const_int(inputs[0].shape[1]) and \
            groups == channels:
        # NCHW
Contributor:

then this comment can be removed

@srkreddy1238 (Contributor Author)

Handled

    elif layout == "NHWC" and \
            kernel_layout == "HWOI" and \
            groups == get_const_int(inputs[0].shape[3]) and \
            groups == channels:
        # NHWC
Contributor:

and this line

@srkreddy1238 (Contributor Author)

handled.

@Huyuwei (Contributor) commented Jun 19, 2018

@srkreddy1238 Can mobilenet be converted now? If so, add it to frontend tests.

@srkreddy1238 (Contributor Author)

OK. I will update the frontend test cases with MobileNet this weekend.

@tqchen merged commit 1e66d3c into apache:master on Jun 24, 2018
@tqchen (Member) commented Jun 24, 2018

This is merged as @Huyuwei approved the change. Please add the MobileNet test in a separate PR.
