Convolution too slow for depthwise when input channels != output channels #347
This is not technically depth-wise convolution, because we have 2 output channels per group. We should see whether this case should be handled by generalizing the depth-wise kernel or the group conv kernel.
PyTorch defines it as depthwise :-| https://pytorch.org/docs/stable/nn.html#conv2d
I see. Let me see if we can generalize the group conv kernel (first generalize it to support C_per_G != K_per_G, then handle the C_per_G == 1, K_per_G == 2 case).
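For reference, the per-group channel bookkeeping under discussion can be sketched in a few lines of plain Python (the function name is mine, not fbgemm's): with groups == IC and OC == 2 * IC, each group sees one input channel but produces two output channels, which is why neither the pure depthwise nor the equal-channels group kernel covers it.

```python
def per_group_channels(ic, oc, groups):
    # Grouped convolution splits input and output channels evenly across groups.
    assert ic % groups == 0 and oc % groups == 0
    return ic // groups, oc // groups  # (C_per_G, K_per_G)

# The shape from this issue: 128 input channels, 256 output channels, 128 groups.
c_per_g, k_per_g = per_group_channels(128, 256, 128)
print(c_per_g, k_per_g)  # 1 input channel, 2 output channels per group
```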
Would it be easier to add this to depthwise?
Also, if we end up modifying groupwise, we should go for K_per_G == N.
Yes, just looked at groupwise again. I agree it will be easier to add this to depthwise.
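To pin down the semantics being added (depthwise convolution with a channel multiplier, as in the PyTorch docs linked above), here is a naive pure-Python reference, a sketch only; the actual fbgemm kernel is vectorized C++ and this mirrors none of its internals:

```python
def depthwise_conv2d(x, w, multiplier, stride=1, pad=1):
    """Reference depthwise conv with a channel multiplier.

    x: input as nested lists [C][H][W].
    w: filters as nested lists [C * multiplier][kh][kw]; input channel c
       feeds output channels c*multiplier .. c*multiplier + multiplier - 1.
    """
    C, H, W = len(x), len(x[0]), len(x[0][0])
    kh, kw = len(w[0]), len(w[0][0])
    Ho = (H + 2 * pad - kh) // stride + 1
    Wo = (W + 2 * pad - kw) // stride + 1
    out = [[[0.0] * Wo for _ in range(Ho)] for _ in range(C * multiplier)]
    for c in range(C):                  # each input channel...
        for m in range(multiplier):     # ...drives `multiplier` filters
            k = c * multiplier + m      # output channel index
            for oy in range(Ho):
                for ox in range(Wo):
                    acc = 0.0
                    for ky in range(kh):
                        for kx in range(kw):
                            iy = oy * stride + ky - pad
                            ix = ox * stride + kx - pad
                            if 0 <= iy < H and 0 <= ix < W:  # zero padding
                                acc += x[c][iy][ix] * w[k][ky][kx]
                    out[k][oy][ox] = acc
    return out

x = [[[1.0, 2.0], [3.0, 4.0]]]                 # one channel, 2x2
ident = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]      # passes the input through
ones = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]       # 3x3 box sum
y = depthwise_conv2d(x, [ident, ones], multiplier=2)
# y[0] reproduces x[0]; y[1] is the box sum over the zero-padded 2x2 input
```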
Summary: Pull Request resolved: pytorch#359. To fix pytorch#347. For review, the core change is in GenerateI8Depthwise.cc; the other changes mostly update the interface and tests. Reviewed By: dskhudia. Differential Revision: D20984303. fbshipit-source-id: 9e28f8957bb325490f43120cc381d5b014dde6be
Summary: Pull Request resolved: pytorch#359. To fix pytorch#347. For review, the core change is in GenerateI8Depthwise.cc; the other changes mostly update the interface and tests. Reviewed By: dskhudia. Differential Revision: D20984303. fbshipit-source-id: 58c016ce33ca5e10051fd33d5b714c01e8d15e76
This is the root cause of https://discuss.pytorch.org/t/got-slow-speed-on-quantized-model-with-fbgemm-on-x86/74439
fbgemmConv takes the im2col route for such cases and it's too slow. Here are the results for a shape I benchmarked.
| N | IC | OC | H | W | G | kernel | stride | pad |
|---|----|----|---|---|---|--------|--------|-----|
| 1 | 128 | 256 | 32 | 100 | 128 | 3 | 1 | 1 |

FP32 time: 3.55 ms
fbgemmConv time: 86.76 ms
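For scale, a back-of-the-envelope multiply-accumulate count (my arithmetic, not from the issue) shows how small this layer is; roughly 7.4M MACs taking 86 ms suggests the im2col path is overhead-bound rather than compute-bound:

```python
def conv_macs(n, ic, oc, h, w, groups, k, stride, pad):
    # Multiply-accumulates for a grouped 2-D convolution with a square kernel.
    ho = (h + 2 * pad - k) // stride + 1
    wo = (w + 2 * pad - k) // stride + 1
    return n * oc * ho * wo * k * k * (ic // groups)

macs = conv_macs(1, 128, 256, 32, 100, 128, 3, 1, 1)
print(macs)  # 7372800 MACs for the benchmarked shape
```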
cc @jspark1105