DOC: split quantization.rst into smaller pieces (pytorch#41321)
Summary:
xref pytorchgh-38010 and pytorchgh-38011.

After this PR, there should be only two warnings:
```
pytorch/docs/source/index.rst:65: WARNING: toctree contains reference to nonexisting \
      document 'torchvision/index'
WARNING: autodoc: failed to import class 'tensorboard.writer.SummaryWriter' from module \
     'torch.utils'; the following exception was raised:
No module named 'tensorboard'
```

If tensorboard and torchvision are prerequisites to building docs, they should be added to the `requirements.txt`.
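For reference, this could look something like the sketch below (the exact file path and whether to pin versions are assumptions, not part of this PR):

```
# hypothetical additions to docs/requirements.txt
tensorboard
torchvision
```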

As for breaking up quantization into smaller pieces: I split out the list of supported operations and the list of modules to separate documents. I think this makes the page flow better, makes it much "lighter" in terms of page cost, and also removes some warnings since the same class names appear in multiple sub-modules.

Pull Request resolved: pytorch#41321

Reviewed By: ngimel

Differential Revision: D22753099

Pulled By: mruberry

fbshipit-source-id: d504787fcf1104a0b6e3d1c12747ec53450841da
mattip authored and facebook-github-bot committed Jul 26, 2020
1 parent 6af6596 commit b7bda23
Showing 15 changed files with 731 additions and 671 deletions.
4 changes: 2 additions & 2 deletions docs/Makefile
@@ -38,5 +38,5 @@ html-stable:
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

clean:
@echo "Removing everything under 'build'.."
@rm -rf $(BUILDDIR)/html/ $(BUILDDIR)/doctrees
@echo "Removing everything under 'build' and 'source/generated'.."
@rm -rf $(BUILDDIR)/html/ $(BUILDDIR)/doctrees $(SOURCEDIR)/generated
5 changes: 4 additions & 1 deletion docs/source/jit.rst
@@ -236,6 +236,7 @@ and we will be able to step into the :func:`@torch.jit.script
TorchScript compiler for a specific function, see
:func:`@torch.jit.ignore <torch.jit.ignore>`.

+.. _inspecting-code:

Inspecting Code
~~~~~~~~~~~~~~~
@@ -287,6 +288,8 @@ You can use this to ensure TorchScript (tracing or scripting) has captured
your model code correctly.


+.. _interpreting-graphs:

Interpreting Graphs
~~~~~~~~~~~~~~~~~~~
TorchScript also has a representation at a lower level than the code pretty-
@@ -317,7 +320,7 @@ including control flow operators for loops and conditionals. As an example:

...

-``graph`` follows the same rules described in the `inspecting-code` section
+``graph`` follows the same rules described in the :ref:`inspecting-code` section
with regard to ``forward`` method lookup.

The example script above produces the graph::
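The graph listing itself is truncated in this diff. As a quick illustration of the two representations this section discusses, here is a minimal sketch using a hypothetical scripted function:

```python
import torch

@torch.jit.script
def clamp_sum(x: torch.Tensor) -> torch.Tensor:
    # control flow like this shows up as prim::If nodes in the graph IR
    if x.sum() > 0:
        return x * 2
    return -x

print(clamp_sum.code)   # high-level, Python-like representation
print(clamp_sum.graph)  # lower-level graph IR described above
```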
328 changes: 328 additions & 0 deletions docs/source/quantization-support.rst

Large diffs are not rendered by default.

703 changes: 50 additions & 653 deletions docs/source/quantization.rst

Large diffs are not rendered by default.

29 changes: 29 additions & 0 deletions docs/source/torch.nn.intrinsic.qat.rst
@@ -0,0 +1,29 @@
torch.nn.intrinsic.qat
--------------------------------

This module implements versions of the fused operations needed for
quantization-aware training.

.. automodule:: torch.nn.intrinsic.qat

ConvBn2d
~~~~~~~~~~~~~~~
.. autoclass:: ConvBn2d
:members:

ConvBnReLU2d
~~~~~~~~~~~~~~~
.. autoclass:: ConvBnReLU2d
:members:

ConvReLU2d
~~~~~~~~~~~~~~~
.. autoclass:: ConvReLU2d
:members:

LinearReLU
~~~~~~~~~~~~~~~
.. autoclass:: LinearReLU
:members:
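To make the role of these classes concrete, here is a minimal eager-mode QAT sketch (the toy model and fusion list are assumptions, not part of this PR): fusing conv + bn + relu while in training mode yields the float fused module from `torch.nn.intrinsic`, and `prepare_qat` then swaps in the variant from this module.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.train()
fused = torch.quantization.fuse_modules(model, [['0', '1', '2']])
fused.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
prepared = torch.quantization.prepare_qat(fused)
print(type(prepared[0]))  # torch.nn.intrinsic.qat.ConvBnReLU2d
```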


23 changes: 23 additions & 0 deletions docs/source/torch.nn.intrinsic.quantized.rst
@@ -0,0 +1,23 @@
torch.nn.intrinsic.quantized
--------------------------------------

This module implements quantized versions of fused operations like conv + relu.

.. automodule:: torch.nn.intrinsic.quantized

ConvReLU2d
~~~~~~~~~~~~~~~
.. autoclass:: ConvReLU2d
:members:

ConvReLU3d
~~~~~~~~~~~~~~~
.. autoclass:: ConvReLU3d
:members:

LinearReLU
~~~~~~~~~~~~~~~
.. autoclass:: LinearReLU
:members:
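These classes are normally produced by `torch.quantization.convert` rather than instantiated directly. A minimal sketch under that assumption (toy model; only the resulting module type is inspected):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.eval()
fused = torch.quantization.fuse_modules(model, [['0', '1']])
fused.qconfig = torch.quantization.get_default_qconfig('fbgemm')
prepared = torch.quantization.prepare(fused)
prepared(torch.randn(1, 3, 32, 32))        # calibration pass
quantized = torch.quantization.convert(prepared)
print(type(quantized[0]))                  # torch.nn.intrinsic.quantized.ConvReLU2d
```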


40 changes: 40 additions & 0 deletions docs/source/torch.nn.intrinsic.rst
@@ -0,0 +1,40 @@
.. _torch_nn_intrinsic:

torch.nn.intrinsic
--------------------------------

This module implements combined (fused) modules such as conv + relu, which
can then be quantized.

.. automodule:: torch.nn.intrinsic

ConvBn1d
~~~~~~~~~~~~~~~
.. autoclass:: ConvBn1d
:members:

ConvBn2d
~~~~~~~~~~~~~~~
.. autoclass:: ConvBn2d
:members:

ConvBnReLU1d
~~~~~~~~~~~~~~~
.. autoclass:: ConvBnReLU1d
:members:

ConvBnReLU2d
~~~~~~~~~~~~~~~
.. autoclass:: ConvBnReLU2d
:members:

ConvReLU1d
~~~~~~~~~~~~~~~
.. autoclass:: ConvReLU1d
:members:

ConvReLU2d
~~~~~~~~~~~~~~~
.. autoclass:: ConvReLU2d
:members:
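For example, fusing an adjacent conv + relu pair produces one of these float fused modules; a quick sketch with an assumed toy model:

```python
import torch
import torch.nn as nn
import torch.nn.intrinsic as nni

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.eval()
fused = torch.quantization.fuse_modules(model, [['0', '1']])
assert isinstance(fused[0], nni.ConvReLU2d)  # still FP32, just fused
```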

21 changes: 21 additions & 0 deletions docs/source/torch.nn.qat.rst
@@ -0,0 +1,21 @@
torch.nn.qat
---------------------------

This module implements versions of the key nn modules **Conv2d()** and
**Linear()**, which run in FP32 but with rounding applied to simulate the
effect of INT8 quantization.

.. automodule:: torch.nn.qat

Conv2d
~~~~~~~~~~~~~~~
.. autoclass:: Conv2d
:members:

Linear
~~~~~~~~~~~~~~~
.. autoclass:: Linear
:members:
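These modules are typically created by `prepare_qat` rather than constructed by hand; a minimal sketch (the toy model is an assumption):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 4))
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
prepared = torch.quantization.prepare_qat(model)
print(type(prepared[0]))            # torch.nn.qat.Linear
out = prepared(torch.randn(2, 16))  # FP32 math with fake-quant rounding
```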



29 changes: 29 additions & 0 deletions docs/source/torch.nn.quantized.dynamic.rst
@@ -0,0 +1,29 @@
torch.nn.quantized.dynamic
--------------------------

.. automodule:: torch.nn.quantized.dynamic

Linear
~~~~~~~~~~~~~~~
.. autoclass:: Linear
:members:

LSTM
~~~~~~~~~~~~~~~
.. autoclass:: LSTM
:members:

LSTMCell
~~~~~~~~~~~~~~~
.. autoclass:: LSTMCell
:members:

GRUCell
~~~~~~~~~~~~~~~
.. autoclass:: GRUCell
:members:

RNNCell
~~~~~~~~~~~~~~~
.. autoclass:: RNNCell
:members:
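These modules are usually produced by `torch.quantization.quantize_dynamic`, which quantizes weights to int8 while activations stay in float; a minimal sketch with an assumed toy model:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 4))
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)
print(type(qmodel[0]))             # torch.nn.quantized.dynamic.Linear
print(qmodel(torch.randn(2, 16)))  # float in/out, int8 weights inside
```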
123 changes: 123 additions & 0 deletions docs/source/torch.nn.quantized.rst
@@ -0,0 +1,123 @@
torch.nn.quantized
------------------

This module implements the quantized versions of the nn layers such as
:class:`~torch.nn.Conv2d` and :class:`~torch.nn.ReLU`.

Functional interface
~~~~~~~~~~~~~~~~~~~~
.. automodule:: torch.nn.quantized.functional

.. autofunction:: relu
.. autofunction:: linear
.. autofunction:: conv1d
.. autofunction:: conv2d
.. autofunction:: conv3d
.. autofunction:: max_pool2d
.. autofunction:: adaptive_avg_pool2d
.. autofunction:: avg_pool2d
.. autofunction:: interpolate
.. autofunction:: hardswish
.. autofunction:: upsample
.. autofunction:: upsample_bilinear
.. autofunction:: upsample_nearest


.. automodule:: torch.nn.quantized

ReLU
~~~~~~~~~~~~~~~
.. autoclass:: ReLU
:members:

ReLU6
~~~~~~~~~~~~~~~
.. autoclass:: ReLU6
:members:

ELU
~~~~~~~~~~~~~~~
.. autoclass:: ELU
:members:

Hardswish
~~~~~~~~~~~~~~~
.. autoclass:: Hardswish
:members:

Conv1d
~~~~~~~~~~~~~~~
.. autoclass:: Conv1d
:members:

Conv2d
~~~~~~~~~~~~~~~
.. autoclass:: Conv2d
:members:

Conv3d
~~~~~~~~~~~~~~~
.. autoclass:: Conv3d
:members:

FloatFunctional
~~~~~~~~~~~~~~~
.. autoclass:: FloatFunctional
:members:

QFunctional
~~~~~~~~~~~~~~~
.. autoclass:: QFunctional
:members:

Quantize
~~~~~~~~~~~~~~~
.. autoclass:: Quantize
:members:

DeQuantize
~~~~~~~~~~~~~~~
.. autoclass:: DeQuantize
:members:

Linear
~~~~~~~~~~~~~~~
.. autoclass:: Linear
:members:

BatchNorm2d
~~~~~~~~~~~~~~~
.. autoclass:: BatchNorm2d
:members:

BatchNorm3d
~~~~~~~~~~~~~~~
.. autoclass:: BatchNorm3d
:members:

LayerNorm
~~~~~~~~~~~~~~~
.. autoclass:: LayerNorm
:members:

GroupNorm
~~~~~~~~~~~~~~~
.. autoclass:: GroupNorm
:members:

InstanceNorm1d
~~~~~~~~~~~~~~~
.. autoclass:: InstanceNorm1d
:members:

InstanceNorm2d
~~~~~~~~~~~~~~~
.. autoclass:: InstanceNorm2d
:members:

InstanceNorm3d
~~~~~~~~~~~~~~~
.. autoclass:: InstanceNorm3d
:members:
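A minimal sketch of how these modules consume quantized tensors (the scale and zero-point values below are arbitrary assumptions for illustration):

```python
import torch
import torch.nn.quantized as nnq

x = torch.randn(2, 3)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128,
                               dtype=torch.quint8)
relu = nnq.ReLU()
qy = relu(qx)            # stays quantized end to end
print(qy.dequantize())

# FloatFunctional records observer statistics for ops like add during
# prepare(); convert() replaces it with QFunctional on quantized tensors
ff = nnq.FloatFunctional()
print(ff.add(x, x))
```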


68 changes: 68 additions & 0 deletions docs/source/torch.quantization.rst
@@ -0,0 +1,68 @@
.. _torch_quantization:

torch.quantization
------------------
.. automodule:: torch.quantization

This module implements the functions you call
directly to convert your model from FP32 to quantized form. For
example, :func:`~torch.quantization.prepare` is used in post-training
quantization to prepare your model for the calibration step, and
:func:`~torch.quantization.convert` actually converts the weights to int8 and
replaces the operations with their quantized counterparts. There are
other helper functions for things like quantizing the input to your
model and performing critical fusions like conv + relu.

Top-level quantization APIs
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: quantize
.. autofunction:: quantize_dynamic
.. autofunction:: quantize_qat
.. autofunction:: prepare
.. autofunction:: prepare_qat
.. autofunction:: convert
.. autoclass:: QConfig
.. autoclass:: QConfigDynamic

.. FIXME: The following doesn't display correctly.
   .. autoattribute:: default_qconfig

Preparing model for quantization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: fuse_modules
.. autoclass:: QuantStub
.. autoclass:: DeQuantStub
.. autoclass:: QuantWrapper
.. autofunction:: add_quant_dequant

Utility functions
~~~~~~~~~~~~~~~~~
.. autofunction:: add_observer_
.. autofunction:: swap_module
.. autofunction:: propagate_qconfig_
.. autofunction:: default_eval_fn

Observers
~~~~~~~~~~~~~~~
.. autoclass:: ObserverBase
:members:
.. autoclass:: MinMaxObserver
.. autoclass:: MovingAverageMinMaxObserver
.. autoclass:: PerChannelMinMaxObserver
.. autoclass:: MovingAveragePerChannelMinMaxObserver
.. autoclass:: HistogramObserver
.. autoclass:: FakeQuantize
.. autoclass:: NoopObserver
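Observers record statistics of the tensors flowing through them, from which quantization parameters are later derived; a minimal standalone sketch:

```python
import torch

obs = torch.quantization.MinMaxObserver(dtype=torch.quint8)
obs(torch.randn(4, 4))                       # record running min/max
scale, zero_point = obs.calculate_qparams()  # derive qparams from stats
print(scale, zero_point)
```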

Debugging utilities
~~~~~~~~~~~~~~~~~~~
.. autofunction:: get_observer_dict
.. autoclass:: RecordingObserver

.. currentmodule:: torch

.. autosummary::
:nosignatures:

nn.intrinsic
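Putting the pieces above together, here is a minimal eager-mode post-training static quantization sketch (the model is an assumed toy example, and the fbgemm backend is assumed to be available):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # float -> quantized
        self.fc = nn.Linear(16, 4)
        self.dequant = torch.quantization.DeQuantStub()  # quantized -> float

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = M().eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
prepared = torch.quantization.prepare(model)      # attaches observers
prepared(torch.randn(8, 16))                      # calibration pass
quantized = torch.quantization.convert(prepared)  # swaps in int8 modules
print(quantized(torch.randn(2, 16)))
```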

6 changes: 3 additions & 3 deletions docs/source/type_info.rst
@@ -19,16 +19,16 @@ A :class:`torch.finfo` is an object that represents the numerical properties of

A :class:`torch.finfo` provides the following attributes:

-========= ===== ========================================
+========== ===== ========================================
Name       Type  Description
-========= ===== ========================================
+========== ===== ========================================
bits       int   The number of bits occupied by the type.
eps        float The smallest representable number such that ``1.0 + eps != 1.0``.
max        float The largest representable number.
min        float The smallest representable number (typically ``-max``).
tiny       float The smallest positive representable number.
resolution float The approximate decimal resolution of this type, i.e., ``10**-precision``.
-========= ===== ========================================
+========== ===== ========================================

.. note::
The constructor of :class:`torch.finfo` can be called without an argument, in which case the class is created for the PyTorch default dtype (as returned by :func:`torch.get_default_dtype`).
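For example, a quick sketch of the attributes listed above:

```python
import torch

fi = torch.finfo(torch.float32)
print(fi.bits, fi.eps, fi.max, fi.min, fi.tiny, fi.resolution)

# the no-argument form describes torch.get_default_dtype()
print(torch.finfo().eps == torch.finfo(torch.get_default_dtype()).eps)
```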
