Update on "add qnnpack path for hardtanh"
Summary:

Adds a QNNPack path for the clamp kernel, which is useful for
hardtanh.
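The QNNPACK-style clamp approach can be sketched in pure Python: rather than dequantizing, the hardtanh bounds are quantized once and the clamp is applied directly on the uint8 values. This is a minimal illustrative sketch, not the actual QNNPACK kernel; the helper names `quantize` and `quantized_hardtanh` are hypothetical.

```python
def quantize(x, scale, zero_point):
    # Affine quantization of a single float to uint8: q = round(x/scale) + zp,
    # clamped to the uint8 range [0, 255].
    q = round(x / scale) + zero_point
    return max(0, min(255, q))

def quantized_hardtanh(qx, scale, zero_point, min_val=-1.0, max_val=1.0):
    # Quantize the clamp bounds once, then clamp every element in the
    # integer domain -- the idea behind a quantized clamp kernel.
    qmin = quantize(min_val, scale, zero_point)
    qmax = quantize(max_val, scale, zero_point)
    return [max(qmin, min(qmax, q)) for q in qx]
```

Because the bounds are precomputed, the inner loop is a plain integer clamp, which is what makes a dedicated backend path worthwhile.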

Test Plan:

python test/test_quantized.py TestQNNPackOps.test_hardtanh

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D20778588](https://our.internmc.facebook.com/intern/diff/D20778588)

[ghstack-poisoned]
vkuzo committed May 4, 2020
1 parent 1bab766 commit a5d4275
Showing 1 changed file with 4 additions and 0 deletions.
4 changes: 4 additions & 0 deletions test/quantization/test_quantized_op.py

@@ -463,6 +463,8 @@ def test_qclamp(self, X, min_val, max_val):
            min_val=hu.floats(-1e6, 1e6, allow_nan=False, allow_infinity=False),
            max_val=hu.floats(-1e6, 1e6, allow_nan=False, allow_infinity=False))
     def test_hardtanh(self, X, min_val, max_val):
+        if 'fbgemm' not in torch.backends.quantized.supported_engines:
+            return
         with override_quantized_engine('fbgemm'):
             X, (scale, zero_point, torch_type) = X

@@ -2839,6 +2841,8 @@ def test_qhardsigmoid(self, X):
            min_val=hu.floats(-1e6, -9.999999974752427e-07, allow_nan=False, allow_infinity=False),
            max_val=hu.floats(9.999999974752427e-07, 1e6, allow_nan=False, allow_infinity=False))
     def test_hardtanh(self, X, min_val, max_val):
+        if 'qnnpack' not in torch.backends.quantized.supported_engines:
+            return
         with override_quantized_engine('qnnpack'):
             X, (scale, zero_point, torch_type) = X
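The added guard lines follow a common gating pattern: bail out of the test early when the requested quantized backend is not built into the current binary. A minimal sketch of the pattern, with `supported_engines` standing in for `torch.backends.quantized.supported_engines` and `run_if_supported` a hypothetical helper:

```python
# Stand-in for torch.backends.quantized.supported_engines in an
# environment where only fbgemm is available (hypothetical).
supported_engines = ['fbgemm']

def run_if_supported(engine, test_fn):
    # Mirror the early `return` in the diff: skip the body entirely
    # when the backend is unavailable, otherwise run it.
    if engine not in supported_engines:
        return None
    return test_fn()
```

With this environment, gating on `'qnnpack'` skips the body while gating on `'fbgemm'` runs it, which is exactly how the two test variants avoid failing on builds that lack one of the engines.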
