[quant] Add quantized Sigmoid module #45883

Closed · wants to merge 3 commits
3 changes: 3 additions & 0 deletions test/quantization/test_quantized_module.py
@@ -715,6 +715,9 @@ def test_elu(self):
def test_leaky_relu(self):
self._test_activation_module_impl("LeakyReLU", nn.LeakyReLU, nnq.LeakyReLU, {"negative_slope": 0.2})

def test_sigmoid(self):
self._test_activation_module_impl("Sigmoid", nn.Sigmoid, nnq.Sigmoid, {})

@given(
num_embeddings=st.integers(10, 50),
embedding_dim=st.integers(5, 50).filter(lambda x: x % 4 == 0),
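The _test_activation_module_impl helper itself is not part of this diff; a rough standalone sketch of the kind of check the new test_sigmoid presumably performs is shown below (qparams, shapes, and tolerance are illustrative assumptions, not the actual helper):

import torch
import torch.nn as nn
import torch.nn.quantized as nnq

def check_quantized_sigmoid():
    x = torch.randn(2, 5)
    # Quantize the fp32 input with arbitrary (illustrative) qparams.
    qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=127, dtype=torch.quint8)

    # Reference: the float module applied to the dequantized input.
    ref = nn.Sigmoid()(qx.dequantize())

    # Quantized module under test; sigmoid outputs lie in (0, 1), so
    # scale=1/256 and zero_point=0 cover the full quint8 output range.
    out = nnq.Sigmoid(1.0 / 256, 0)(qx).dequantize()

    assert torch.allclose(out, ref, atol=0.01), "quantized Sigmoid mismatch"

check_quantized_sigmoid()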
3 changes: 2 additions & 1 deletion torch/nn/quantized/modules/__init__.py
@@ -2,7 +2,7 @@
import torch
from torch.nn.modules.pooling import MaxPool2d

from .activation import ReLU, ReLU6, Hardswish, ELU, LeakyReLU
from .activation import ReLU, ReLU6, Hardswish, ELU, LeakyReLU, Sigmoid
from .batchnorm import BatchNorm2d, BatchNorm3d
from .normalization import LayerNorm, GroupNorm, InstanceNorm1d, \
InstanceNorm2d, InstanceNorm3d
@@ -100,6 +100,7 @@ def from_float(mod):
'Hardswish',
'ELU',
'LeakyReLU',
'Sigmoid',
'LayerNorm',
'GroupNorm',
'InstanceNorm1d',
21 changes: 21 additions & 0 deletions torch/nn/quantized/modules/activation.py
@@ -149,3 +149,24 @@ def _get_name(self):
def from_float(cls, mod):
scale, zero_point = mod.activation_post_process.calculate_qparams()
return cls(float(scale), int(zero_point), mod.negative_slope, mod.inplace)

class Sigmoid(torch.nn.Sigmoid):
r"""This is the quantized equivalent of :class:`~torch.nn.LeakyReLU`.
Contributor: The docs are off here, can we refer to sigmoid instead of leakyRelu?

Contributor Author: oh sorry, will fix in next PR


Args:
scale: quantization scale of the output tensor
zero_point: quantization zero point of the output tensor
"""

def __init__(self, output_scale: float, output_zero_point: int):
Contributor: Can we have default args? It is safe to assume those for this function.

Contributor Author: Not sure; it's unlikely that a user will construct a quantized model from quantized modules from scratch anyway, so what is the benefit of having default args?

super().__init__()
self.output_scale = output_scale
self.output_zero_point = output_zero_point

def forward(self, input):
return torch.ops.quantized.sigmoid(input, self.output_scale, self.output_zero_point)

@classmethod
def from_float(cls, mod):
output_scale, output_zero_point = mod.activation_post_process.calculate_qparams()
return cls(float(output_scale), int(output_zero_point))
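
For context, a minimal usage sketch of the new module as added by this PR (assuming a build that includes the quantized::sigmoid op; the qparams below are hand-picked for illustration rather than produced by an observer):

import torch
import torch.nn.quantized as nnq

x = torch.randn(4, 8)
qx = torch.quantize_per_tensor(x, scale=0.04, zero_point=128, dtype=torch.quint8)

# Construct the quantized Sigmoid directly with output qparams, mirroring
# what from_float() would compute from the module's observer.
qsig = nnq.Sigmoid(output_scale=1.0 / 256, output_zero_point=0)
qy = qsig(qx)

# The dequantized result tracks the fp32 reference up to quantization error.
print(torch.allclose(qy.dequantize(), torch.sigmoid(x), atol=0.01))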