[PyTorch] Fix quantized Conv1d module parameters #62356
Conversation
💊 CI failures summary and remediations — as of commit f860322 (more details on the Dr. CI page):
- 1 job timed out
- 🚧 1 fixed upstream failure: these were probably caused by upstream breakages that were already fixed. Please rebase.
This pull request was exported from Phabricator. Differential Revision: D29957556
08540c6 → 16196bc (compare)
16196bc → 045873d (compare)
045873d → 756a069 (compare)
Summary: Pull Request resolved: pytorch#62356

In `torch/nn/quantized/modules/conv.py`, Conv1d turns a scalar `kernel_size` into a 2-tuple by repeating the `kernel_size` value. This breaks `Conv1d` because internally [`qconv.cpp`](https://github.com/pytorch/pytorch/blob/06dfaadfc6357ed909ed15c7ef79d503c49d9475/aten/src/ATen/native/quantized/cpu/qconv.cpp#L841) unsqueezes the input from shape (N, C, L) to (N, C, 1, L). Applying the repeated kernel to this input shape produces a negative output shape in [`ConvUtils.h`](https://github.com/pytorch/FBGEMM/blob/203f7ff6e07d62b042e7d755fd1f4789d978e4d1/include/fbgemm/ConvUtils.h#L118-L119) whenever kernel_size > 1.

This change modifies the processing logic for `kernel_size` and a few other parameters to follow the pattern of [`torch/nn/modules/conv.py`](https://github.com/pytorch/pytorch/blob/aae2a3c95ee6d62e834a5e6890a12f7ecf0dd17f/torch/nn/modules/conv.py#L284-L287).

Test Plan: Rely on unit tests

Reviewed By: kimishpatel

Differential Revision: D29957556

fbshipit-source-id: be33a3ce516788125f750274b527d27a57ff8a8e
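The negative-output-shape arithmetic described above can be checked with the standard convolution output-size formula (as used in FBGEMM's `ConvUtils.h`). This is a standalone illustration of the bug's mechanism, not the actual patch; `conv_out_dim` is a hypothetical helper name:

```python
def conv_out_dim(in_dim, kernel, stride=1, pad=0, dilation=1):
    # Standard convolution output-size formula:
    # floor((in + 2*pad - dilation*(kernel - 1) - 1) / stride) + 1
    return (in_dim + 2 * pad - dilation * (kernel - 1) - 1) // stride + 1

# qconv.cpp unsqueezes the (N, C, L) input to (N, C, 1, L),
# so the height dimension of the effective 2D conv is 1.
H, W = 1, 16
kernel_size = 3

# Buggy behavior: scalar kernel_size repeated into (3, 3) as if for Conv2d,
# so a kernel of size 3 is applied along the height-1 dimension:
print(conv_out_dim(H, kernel_size))  # -1: negative output height

# Expected behavior: the implicit height dimension keeps a kernel of 1,
# and kernel_size only applies along L:
print(conv_out_dim(H, 1), conv_out_dim(W, kernel_size))  # 1 14
```

With kernel_size = 1 the repeated tuple happens to be harmless, which is why the bug only surfaces for kernel_size > 1, exactly as the summary states.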
756a069 → f860322 (compare)
This pull request has been merged in 70f57bc.