quant docs: add and clean up GroupNorm
Cleans up the quantized GroupNorm docstring and adds it to quantization docs.

Test plan:
* build on Mac OS and inspect

ghstack-source-id: 4347125fe86d10d4f59f093dacc7b32cb1d2c6ea
Pull Request resolved: #40343
vkuzo committed Jun 20, 2020
1 parent 58143d9 commit ad62f2e
Showing 3 changed files with 20 additions and 3 deletions.
12 changes: 12 additions & 0 deletions docs/source/quantization.rst
@@ -242,6 +242,7 @@ Layers for the quantization-aware training
* :class:`~torch.nn.qat.Conv2d` — 2D convolution
* :class:`~torch.nn.qat.Hardswish` — Hardswish
* :class:`~torch.nn.qat.LayerNorm` — LayerNorm
+* :class:`~torch.nn.qat.GroupNorm` — GroupNorm

``torch.quantization``
~~~~~~~~~~~~~~~~~~~~~~
@@ -354,6 +355,7 @@ Quantized version of standard NN layers.
quantized representation of 6
* :class:`~torch.nn.quantized.Hardswish` — Hardswish
* :class:`~torch.nn.quantized.LayerNorm` — LayerNorm. *Note: performance on ARM is not optimized*.
+* :class:`~torch.nn.quantized.GroupNorm` — GroupNorm. *Note: performance on ARM is not optimized*.

``torch.nn.quantized.dynamic``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -714,6 +716,11 @@ LayerNorm
.. autoclass:: LayerNorm
:members:

+GroupNorm
+~~~~~~~~~~~~~~~
+.. autoclass:: GroupNorm
+   :members:


torch.nn.quantized
----------------------------
@@ -802,6 +809,11 @@ LayerNorm
.. autoclass:: LayerNorm
:members:

+GroupNorm
+~~~~~~~~~~~~~~~
+.. autoclass:: GroupNorm
+   :members:

torch.nn.quantized.dynamic
----------------------------

4 changes: 2 additions & 2 deletions torch/nn/qat/modules/normalization.py
@@ -3,8 +3,8 @@

class GroupNorm(nn.GroupNorm):
r"""
-A GroupNorm module attached with FakeQuantize modules for both output
-activation and weight, used for quantization aware training.
+A GroupNorm module attached with FakeQuantize modules for output
+activation, used for quantization aware training.
Similar to `torch.nn.GroupNorm`, with FakeQuantize modules initialized to
default.
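
Not part of the diff: a minimal sketch of how the behavior described in the updated docstring is usually exercised through the eager-mode QAT workflow. It assumes nn.GroupNorm is covered by the default QAT module mappings; the model, shapes, and names below are illustrative.

# QAT sketch (assumption: nn.GroupNorm is swapped for nn.qat.GroupNorm by the
# default QAT mappings; everything here is illustrative, not from the diff).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(4, 4, kernel_size=3),
    nn.GroupNorm(num_groups=2, num_channels=4),
).train()

model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)

# After prepare_qat, the GroupNorm's output activation is passed through a
# FakeQuantize module; per the docstring change above, its weight is left in
# fp32 rather than fake-quantized.
out = model(torch.randn(2, 4, 8, 8))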
7 changes: 6 additions & 1 deletion torch/nn/quantized/modules/normalization.py
Expand Up @@ -42,7 +42,12 @@ def from_float(cls, mod):
return new_mod

class GroupNorm(torch.nn.GroupNorm):
r"""This is the quantized version of `torch.nn.GroupNorm`.
r"""This is the quantized version of :class:`~torch.nn.GroupNorm`.
Additional args:
* **scale** - quantization scale of the output, type: double.
* **zero_point** - quantization zero point of the output, type: long.
"""
__constants__ = ['num_groups', 'num_channels', 'eps', 'affine']

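
Not part of the diff: the scale and zero_point documented above are normally produced by an observer during calibration and attached to the converted module. A minimal post-training static quantization sketch, assuming nn.GroupNorm is included in the default static quantization mappings as the docs added in this commit imply (model and shapes are illustrative):

import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> quint8
        self.gn = nn.GroupNorm(num_groups=2, num_channels=4)
        self.dequant = torch.quantization.DeQuantStub()  # quint8 -> fp32

    def forward(self, x):
        return self.dequant(self.gn(self.quant(x)))

m = M().eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 4, 8, 8))                   # calibration pass fills the observers
torch.quantization.convert(m, inplace=True)
# m.gn is expected to now be torch.nn.quantized.GroupNorm, carrying the
# observed output scale and zero_point described in the docstring above.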
