hardswish: make it work in static quantization #36545


Closed
vkuzo wants to merge 4 commits

Conversation

vkuzo
Contributor

@vkuzo vkuzo commented Apr 14, 2020

Stack from ghstack:

Summary:

  • adds a quantized nn.Module for Hardswish so we can observe activation values
  • modifies the hardswish op to allow specifying the output scale and zero_point
  • makes the Hardswish module get properly swapped during static quantization

Test Plan:

added tests and they pass for:

  • the new _out flavor of hardswish
  • QNNPACK changes
  • static quant e2e

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D21045320
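The op change above centers on hardswish(x) = x * relu6(x + 3) / 6, evaluated in the quantized domain with a caller-specified output scale and zero_point. A minimal float sketch of the math (plain Python, not the actual PyTorch kernel; all function names here are illustrative):

```python
def hardswish(x):
    """hardswish(x) = x * relu6(x + 3) / 6."""
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

def quantize(x, scale, zero_point):
    """Affine-quantize a float to quint8: clamp(round(x / scale) + zero_point, 0, 255)."""
    return max(0, min(255, round(x / scale) + zero_point))

def dequantize(q, scale, zero_point):
    """Recover an approximate float: x ~ (q - zero_point) * scale."""
    return (q - zero_point) * scale

def quantized_hardswish(q_in, in_scale, in_zp, out_scale, out_zp):
    """Conceptual quantized kernel: dequantize, apply the float op,
    then requantize with the *output* scale/zero_point supplied by the observer."""
    return quantize(hardswish(dequantize(q_in, in_scale, in_zp)), out_scale, out_zp)
```

This sketch also shows why the module needs an observer: the output range of hardswish differs from the input range, so the output scale and zero_point must be computed from observed activation values before conversion.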

@dr-ci
dr-ci bot commented Apr 14, 2020

💊 Build failures summary and remediations

As of commit 74a0986 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@vkuzo vkuzo added the oncall: quantization Quantization support in PyTorch label Apr 14, 2020
@vkuzo vkuzo self-assigned this Apr 14, 2020
vkuzo added a commit that referenced this pull request Apr 14, 2020
ghstack-source-id: e17b9a0
Pull Request resolved: #36545
@facebook-github-bot facebook-github-bot deleted the gh/vkuzo/30/head branch April 19, 2020 14:17
rgommers added a commit to rgommers/pytorch that referenced this pull request Apr 20, 2020
Introduced in pytorchgh-36545, though it is unclear whether that PR itself was problematic; the
new error messages look similar to already-silenced ones about Module:

```
torch/nn/quantized/modules/activation.py:84: error: Name 'torch.nn.Hardswish' is not defined  [name-defined]
torch/nn/qat/modules/activations.py:5: error: Name 'nn.Hardswish' is not defined  [name-defined]
torch/nn/qat/modules/activations.py:17: error: Module has no attribute "Hardswish"  [attr-defined]
torch/quantization/default_mappings.py:18: error: Module has no attribute "Hardswish"  [attr-defined]
torch/quantization/default_mappings.py:49: error: Module has no attribute "Hardswish"  [attr-defined]
torch/quantization/fake_quantize.py:126: error: Module has no attribute "per_tensor_symmetric"  [attr-defined]
torch/quantization/fake_quantize.py:132: error: Module has no attribute "per_channel_symmetric"  [attr-defined]
```
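Errors of this shape are usually silenced either inline with `# type: ignore[...]` comments or per-module in the mypy configuration. A hypothetical sketch of the config approach (the section name is illustrative, not taken from PyTorch's actual mypy.ini):

```
# mypy.ini -- silence a module whose attributes mypy cannot resolve
[mypy-torch.nn.qat.modules.activations]
ignore_errors = True
```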
Labels
oncall: quantization Quantization support in PyTorch
2 participants