Conversation

larryliu0820
Contributor

Summary:
In torch/nn/quantized/modules/conv.py, Conv1d converts a scalar kernel_size into a size-2 tuple by repeating the kernel_size value. This breaks Conv1d because qconv.cpp internally unsqueezes the input from shape (N, C, L) to (N, C, 1, L); applying the repeated kernel to that shape produces a negative output shape in ConvUtils.h whenever kernel_size > 1.
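
To make the failure mode concrete, here is a rough sketch of the output-size arithmetic (the standard convolution formula; the kernel_size value below is just for illustration):

```python
# Standard convolution output-size formula, applied to the "height" dimension
# that qconv.cpp introduces by unsqueezing (N, C, L) to (N, C, 1, L).
def conv_out_size(in_size, kernel, stride=1, padding=0, dilation=1):
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# With the buggy size-2 tuple, the kernel "height" equals kernel_size (e.g. 3),
# while the unsqueezed input height is 1:
print(conv_out_size(in_size=1, kernel=3))  # -1, a negative output dimension
```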

Here I'm modifying the processing logic for kernel_size and a few other parameters to follow the pattern of torch/nn/modules/conv.py.

Test Plan: Rely on unit tests

Differential Revision: D29957556

@facebook-github-bot
Contributor

facebook-github-bot commented Jul 28, 2021


💊 CI failures summary and remediations

As of commit f860322 (more details on the Dr. CI page):


  • 1/2 failures possibly* introduced in this PR
    • 1/1 non-scanned failure(s)
  • 1/2 broken upstream at merge base eac288e on Jul 30 from 8:12am to 12:37pm

1 job timed out:

  • pytorch_linux_xenial_py3_clang5_asan_test1

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D29957556

1 similar comment

jbschlosser requested review from vkuzo and z-a-f and removed request for jbschlosser July 28, 2021 22:02
albanD removed their request for review July 29, 2021 18:02
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D29957556

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D29957556

Summary:
Pull Request resolved: pytorch#62356

In `torch/nn/quantized/modules/conv.py`, `Conv1d` converts a scalar `kernel_size` into a size-2 tuple by repeating the `kernel_size` value. This breaks `Conv1d` because [`qconv.cpp`](https://github.com/pytorch/pytorch/blob/06dfaadfc6357ed909ed15c7ef79d503c49d9475/aten/src/ATen/native/quantized/cpu/qconv.cpp#L841) internally unsqueezes the input from shape (N, C, L) to (N, C, 1, L); applying the aforementioned kernel to that shape produces a negative output shape in [`ConvUtils.h`](https://github.com/pytorch/FBGEMM/blob/203f7ff6e07d62b042e7d755fd1f4789d978e4d1/include/fbgemm/ConvUtils.h#L118-L119) whenever `kernel_size > 1`.

Here I'm modifying the processing logic for `kernel_size` and a few other parameters to follow the pattern of [`torch/nn/modules/conv.py`](https://github.com/pytorch/pytorch/blob/aae2a3c95ee6d62e834a5e6890a12f7ecf0dd17f/torch/nn/modules/conv.py#L284-L287).
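
For reference, a minimal sketch of the parameter handling this follows, modeled on the linked `torch/nn/modules/conv.py` lines (the local variable names are illustrative, not the exact code in this diff):

```python
from torch.nn.modules.utils import _single

# Conv1d-style normalization: scalar arguments become 1-element tuples
# instead of being repeated into 2-element ones.
kernel_size, stride, padding, dilation = 3, 1, 0, 1

kernel_size_ = _single(kernel_size)  # (3,)  -- not (3, 3)
stride_ = _single(stride)            # (1,)
padding_ = _single(padding)          # (0,)
dilation_ = _single(dilation)        # (1,)
```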

Test Plan: Rely on unit tests

Reviewed By: kimishpatel

Differential Revision: D29957556

fbshipit-source-id: be33a3ce516788125f750274b527d27a57ff8a8e
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D29957556

@facebook-github-bot
Contributor

This pull request has been merged in 70f57bc.
