[docs][ao] Add missing documentation for torch.quantized_batch_norm (#63240)

Summary:
Pull Request resolved: #63240

The op is exposed to the end user via torch.quantized_batch_norm, but had no existing documentation.

Test Plan:
CI

Imported from OSS

Reviewed By: VitalyFedyunin

Differential Revision: D30316431

fbshipit-source-id: bf2dc8b7b6f497cf73528eaa2bedef9f65029d84
supriyar authored and alanwaketan committed Aug 17, 2021
1 parent 05a8f72 commit 9ef85fd
Showing 2 changed files with 45 additions and 0 deletions.
1 change: 1 addition & 0 deletions docs/source/torch.rst
@@ -352,6 +352,7 @@ Pointwise Ops
polygamma
positive
pow
quantized_batch_norm
rad2deg
real
reciprocal
44 changes: 44 additions & 0 deletions torch/_torch_docs.py
@@ -11228,6 +11228,50 @@ def merge_dicts(*dicts):
[100, 200]], dtype=torch.uint8)
""")


add_docstr(torch.quantized_batch_norm,
           r"""
quantized_batch_norm(input, weight=None, bias=None, mean, var, eps, output_scale, output_zero_point) -> Tensor

Applies batch normalization on a 4D (NCHW) quantized tensor.

.. math::

    y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta

Arguments:
    input (Tensor): quantized tensor
    weight (Tensor): float tensor that corresponds to the gamma, size C
    bias (Tensor): float tensor that corresponds to the beta, size C
    mean (Tensor): float mean value in batch normalization, size C
    var (Tensor): float tensor for variance, size C
    eps (float): a value added to the denominator for numerical stability.
    output_scale (float): output quantized tensor scale
    output_zero_point (int): output quantized tensor zero_point

Returns:
    Tensor: A quantized tensor with batch normalization applied.

Example::

    >>> qx = torch.quantize_per_tensor(torch.rand(2, 2, 2, 2), 1.5, 3, torch.quint8)
    >>> torch.quantized_batch_norm(qx, torch.ones(2), torch.zeros(2), torch.rand(2), torch.rand(2), 0.00001, 0.2, 2)
    tensor([[[[-0.2000, -0.2000],
              [ 1.6000, -0.2000]],

             [[-0.4000, -0.4000],
              [-0.4000,  0.6000]]],


            [[[-0.2000, -0.2000],
              [-0.2000, -0.2000]],

             [[ 0.6000, -0.4000],
              [ 0.6000, -0.4000]]]], size=(2, 2, 2, 2), dtype=torch.quint8,
           quantization_scheme=torch.per_tensor_affine, scale=0.2, zero_point=2)
""")


add_docstr(torch.Generator,
r"""
Generator(device='cpu') -> Generator
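As a quick illustration of the semantics the new docstring describes, here is a minimal sketch (not part of the commit; the inputs and the float reference path are assumptions) that checks torch.quantized_batch_norm against a dequantize → float batch_norm → requantize reference:

```python
import torch

# Hypothetical sanity check: quantized_batch_norm should roughly match
# dequantize -> float batch_norm -> requantize, up to kernel rounding.
qx = torch.quantize_per_tensor(torch.rand(2, 2, 2, 2), 1.5, 3, torch.quint8)
weight, bias = torch.ones(2), torch.zeros(2)  # gamma and beta, size C=2
mean, var = torch.rand(2), torch.rand(2)      # per-channel statistics, size C=2
eps, out_scale, out_zero_point = 1e-5, 0.2, 2

# The quantized op being documented in this commit.
qy = torch.quantized_batch_norm(qx, weight, bias, mean, var,
                                eps, out_scale, out_zero_point)

# Float reference: dequantize, apply inference-mode batch norm, requantize.
y = torch.nn.functional.batch_norm(qx.dequantize(), mean, var,
                                   weight, bias, training=False, eps=eps)
qy_ref = torch.quantize_per_tensor(y, out_scale, out_zero_point, torch.quint8)

# Fraction of matching integer values; expected near 1.0 up to rounding.
print((qy.int_repr() == qy_ref.int_repr()).float().mean())
```

The comparison uses int_repr() so that both tensors are judged on their underlying quantized integer values rather than on dequantized floats.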
