Update on "[quant] Add quantized::leaky_relu that takes scale/zero_point as input"

Summary:
#45593

Previously, quantized leaky_relu did not require observation and simply inherited
the quantization parameters from its input, but that does not work well in QAT.
This PR adds a quantized::leaky_relu that observes the output; it will become the
default leaky_relu that our quantization tools produce (eager and graph mode).
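As a rough sketch of the new overload: the op name and argument order follow this PR's diff (self, negative_slope, inplace, output_scale, output_zero_point), but the scale/zero_point values below are purely illustrative.

```python
import torch

# Quantize a float tensor, then call quantized::leaky_relu with an
# explicit output scale/zero_point (signature per this PR; the numeric
# quantization parameters here are illustrative, not observed values).
x = torch.randn(4, 4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)

# negative_slope=0.01, inplace=False, output_scale=0.05, output_zero_point=128
qy = torch.ops.quantized.leaky_relu(qx, 0.01, False, 0.05, 128)
```

Because the output carries its own scale/zero_point, an observer (or QAT fake-quant) can learn output quantization parameters rather than reusing the input's.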

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D24067681](https://our.internmc.facebook.com/intern/diff/D24067681)

[ghstack-poisoned]
jerryzh168 committed Oct 5, 2020
1 parent 19a7faf commit c37f812
Showing 1 changed file with 3 additions and 0 deletions.
aten/src/ATen/native/quantized/cpu/qrelu.cpp
@@ -174,6 +174,9 @@ class QLeakyRelu final {
public:
static Tensor run(Tensor self, Scalar negative_slope, bool inplace, double output_scale, int64_t output_zero_point) {
    // The inplace argument is currently ignored. TODO: support inplace
if (inplace) {
TORCH_WARN("inplace=True is not supported for quantized::leaky_relu yet");
}
const auto qx = self.contiguous(self.suggest_memory_format());
auto qy = at::_empty_affine_quantized(qx.sizes(),
at::device(kCPU).dtype(self.scalar_type()),
