[quantization]: Add support for quantization aware training for Leaky Relu and Sigmoid #45593
Labels
oncall: quantization
Quantization support in PyTorch
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Comments
I think for sigmoid we don't need a QAT module; it should be supported in the workflow. I have WIP PRs starting from #45538.
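Since sigmoid's output is always bounded to [0, 1], its quantization parameters can be fixed rather than observed during training, which is why no separate QAT module is needed. A minimal sketch of the idea (the scale of 1/256 and zero point of 0 for quint8 are the standard choice for a [0, 1] range, shown here purely for illustration):

```python
import torch

x = torch.randn(4)
y = torch.sigmoid(x)  # outputs always lie in [0, 1]

# Because the output range is known a priori, the quantization parameters
# can be fixed: scale = 1/256, zero_point = 0 for quint8.
qy = torch.quantize_per_tensor(y, scale=1.0 / 256.0, zero_point=0, dtype=torch.quint8)
print(qy.q_scale(), qy.q_zero_point())
```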
jerryzh168
added a commit
that referenced
this issue
Oct 1, 2020
Summary: #45593. Previously, quantized leaky_relu did not require observation and just inherited the quantization parameters from its input, but that does not work very well in QAT. This PR adds a quantized::leaky_relu that observes its output; it will become the default leaky_relu that our quantization tools produce (eager/graph mode). [ghstack-poisoned]
jerryzh168
added further commits
that referenced
this issue
Oct 2–6, 2020
…int as input" Summary: #45593. Previously, quantized leaky_relu did not require observation and just inherited the quantization parameters from its input, but that does not work very well in QAT. This PR adds a quantized::leaky_relu that observes its output; it will become the default leaky_relu that our quantization tools produce (eager/graph mode). Differential Revision: [D24067681](https://our.internmc.facebook.com/intern/diff/D24067681) [ghstack-poisoned]
facebook-github-bot
pushed a commit
that referenced
this issue
Oct 6, 2020
#45702) Summary: Pull Request resolved: #45702 (#45593). Previously, quantized leaky_relu did not require observation and just inherited the quantization parameters from its input, but that does not work very well in QAT. This PR adds a quantized::leaky_relu that observes its output; it will become the default leaky_relu that our quantization tools produce (eager/graph mode). Test Plan: Imported from OSS. Reviewed By: raghuramank100. Differential Revision: D24067681. fbshipit-source-id: d216738344363794b82bd3d75c8587a4b9415bca
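After this change, a quantized LeakyReLU carries its own output scale and zero point rather than reusing the input's. A hedged sketch of eager-mode usage (the `torch.nn.quantized.LeakyReLU` module name and its `scale`/`zero_point` arguments follow the API around this release, but exact signatures may differ across versions, and all numeric values here are illustrative):

```python
import torch
import torch.nn.quantized as nnq

x = torch.randn(2, 3)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)

# The quantized LeakyReLU now takes its own output scale/zero_point,
# which QAT can calibrate with an output observer (values illustrative).
m = nnq.LeakyReLU(scale=0.05, zero_point=64, negative_slope=0.01)
qy = m(qx)
print(qy.q_scale(), qy.q_zero_point())  # output params differ from the input's
```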
Can we close this issue now?
raghuramank100
added
the
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
label
Mar 8, 2021
I think so.
Currently, PyTorch supports quantized implementations of leaky_relu and sigmoid. For sigmoid, the outputs of the quantized module are quantized over the range [0, 1]. For leaky_relu, the scale and zero point of the input are used to define the range of the output. This behavior is not modeled during quantization aware training.
A possible solution is to define:
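For example, something along these lines (a hypothetical sketch only: the `LeakyReLUQAT` name and the wiring of the qconfig's activation fake-quant are illustrative assumptions, not the original proposal):

```python
import torch
import torch.nn as nn
import torch.quantization as tq

class LeakyReLUQAT(nn.Module):
    """Hypothetical QAT module: fake-quantizes its own output instead of
    inheriting the input's quantization parameters."""
    def __init__(self, negative_slope=0.01, qconfig=None):
        super().__init__()
        self.negative_slope = negative_slope
        # Per-output observer + fake-quant, taken from the qconfig if given.
        self.activation_post_process = (
            qconfig.activation() if qconfig else tq.default_fake_quant()
        )

    def forward(self, x):
        y = nn.functional.leaky_relu(x, self.negative_slope)
        # Simulate output quantization so training learns an output range.
        return self.activation_post_process(y)

m = LeakyReLUQAT(qconfig=tq.get_default_qat_qconfig("fbgemm"))
y = m(torch.randn(2, 3))  # float output with a fake-quantized output range
```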
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @vkuzo