New Custom Ops Documentation landing page (pytorch#127400)
We create a new landing page for PyTorch custom ops (suggested by
jansel). All of our error messages will link here, and I'll work with
the docs team to see if we can boost SEO for this page.

NB: the landing page links to some non-searchable webpages. Two of those
(the Python custom ops tutorial and the C++ custom ops tutorial) will turn
into actual webpages when PyTorch 2.4 comes around. I'll make the third one
(the Custom Operators Manual) once it stabilizes (we continuously add new
things to it, and its length means we may want to create a custom
website for it to keep the presentation digestible).

Test Plan:
- view docs preview.
Pull Request resolved: pytorch#127400
Approved by: https://github.com/jansel
ghstack dependencies: pytorch#127291, pytorch#127292
zou3519 authored and Aidyn-A committed May 30, 2024
1 parent 9ce4419 commit a00e0d1
Showing 4 changed files with 79 additions and 24 deletions.
20 changes: 7 additions & 13 deletions docs/source/export.rst
@@ -632,23 +632,17 @@ number of paths. In such cases, users will need to rewrite their code using
special control flow operators. Currently, we support :ref:`torch.cond <cond>`
to express if-else like control flow (more coming soon!).
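A data-dependent branch rewritten this way looks like the following minimal
sketch (assuming the public ``torch.cond`` entry point; older releases exposed
it under an experimental namespace)::

    import torch

    def forward(x):
        # Both branches must accept the same operands and return
        # tensors with matching shape and dtype.
        def true_fn(x):
            return x.sin()

        def false_fn(x):
            return x.cos()

        # torch.cond traces both branches rather than specializing
        # on the runtime value of x.sum() > 0.
        return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))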

-Missing Meta Kernels for Operators
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Missing Fake/Meta/Abstract Kernels for Operators
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-When tracing, a META implementation (or "meta kernel") is required for all
-operators. This is used to reason about the input/output shapes for this
-operator.
+When tracing, a FakeTensor kernel (aka meta kernel, abstract impl) is
+required for all operators. This is used to reason about the input/output shapes
+for this operator.

-To register a meta kernel for a C++ Custom Operator, please refer to
-`this documentation <https://docs.google.com/document/d/1_W62p8WJOQQUzPsJYa7s701JXt0qf2OfLub2sbkHOaU/edit#heading=h.ahugy69p2jmz>`__.
-
-The official API for registering custom meta kernels for custom ops implemented
-in python is currently undergoing development. While the final API is being
-refined, you can refer to the documentation
-`here <https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0>`_.
+Please see :func:`torch.library.register_fake` for more details.
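As an illustration (not part of this commit's diff), a minimal sketch of
pairing a custom op with a FakeTensor kernel via
:func:`torch.library.register_fake`; the op ``mylib::channel_shuffle`` is a
made-up example name::

    import torch

    @torch.library.custom_op("mylib::channel_shuffle", mutates_args=())
    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        n, c, h, w = x.shape
        return (x.view(n, groups, c // groups, h, w)
                .transpose(1, 2)
                .reshape(n, c, h, w))

    @torch.library.register_fake("mylib::channel_shuffle")
    def _(x, groups):
        # Only shape/dtype reasoning happens here; no data is touched.
        return torch.empty_like(x)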

 In the unfortunate case where your model uses an ATen operator that does not
-have a meta kernel implementation yet, please file an issue.
+have a FakeTensor kernel implementation yet, please file an issue.


Read More
5 changes: 4 additions & 1 deletion docs/source/library.rst
@@ -1,3 +1,5 @@
+.. _torch-library-docs:
+
 torch.library
 ===================================
 .. py:module:: torch.library
@@ -9,7 +11,8 @@ custom operators, and extending operators defined with PyTorch's C++ operator
registration APIs (e.g. aten operators).

-For a detailed guide on effectively using these APIs, please see
-`this gdoc <https://docs.google.com/document/d/1W--T6wz8IY8fOI0Vm8BF44PdBgs283QvpelJZWieQWQ/edit>`_
+Please see :ref:`custom-ops-landing-page`
+for more details on how to effectively use these APIs.

Testing custom ops
------------------
56 changes: 56 additions & 0 deletions docs/source/notes/custom_operators.rst
@@ -0,0 +1,56 @@
.. _custom-ops-landing-page:

PyTorch Custom Operators Landing Page
=====================================

PyTorch offers a large library of operators that work on Tensors (e.g. :func:`torch.add`,
:func:`torch.sum`, etc). However, you may wish to bring a new custom operation to PyTorch
and get it to work with subsystems like :func:`torch.compile`, autograd, and :func:`torch.vmap`.
In order to do so, you must register the custom operation with PyTorch via the Python
:ref:`torch-library-docs` or C++ TORCH_LIBRARY APIs.

TL;DR
-----

How do I author a custom op from Python?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

..
[comment] TODO(rzou): The following will be a link to a tutorial on the PyTorch tutorials site in 2.4
Please see the `Python Custom Operators tutorial <https://colab.research.google.com/drive/1xCh5BNHxGnutqGLMHaHwm47cbDL9CB1g#scrollTo=gg6WorNtKzeh>`_
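Until that tutorial lands on the tutorials site, here is a hedged sketch of
the Python flow using :func:`torch.library.custom_op` (the PyTorch 2.4 API);
``mylib::numpy_sin`` is an illustrative name::

    import numpy as np
    import torch

    @torch.library.custom_op("mylib::numpy_sin", mutates_args=())
    def numpy_sin(x: torch.Tensor) -> torch.Tensor:
        # Opaque to PyTorch (drops to NumPy), hence the custom op wrapper.
        return torch.from_numpy(np.sin(x.numpy(force=True)))

    x = torch.randn(3)
    assert torch.allclose(numpy_sin(x), x.sin())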


How do I integrate custom C++ and/or CUDA code with PyTorch?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

..
[comment] TODO(rzou): The following will be a link to a tutorial on the PyTorch tutorials site in 2.4
Please see the `Custom C++ and CUDA Operators tutorial <https://docs.google.com/document/d/1-LdJZBzlxiF0Tm-8NfbyFvRJaofdwRgLcycXGmlIpS0>`_
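One common Python-side entry point for that workflow is
:func:`torch.utils.cpp_extension.load`, which JIT-compiles C++/CUDA sources;
treat the sketch below as an assumption-laden example, with ``my_ops.cpp`` and
the ``mylib`` namespace as placeholders::

    import torch
    from torch.utils import cpp_extension

    # my_ops.cpp is assumed to register its operators via
    # TORCH_LIBRARY(mylib, m) in C++.
    cpp_extension.load(
        name="mylib",
        sources=["my_ops.cpp"],
        is_python_module=False,  # load the library purely for its registrations
    )

    out = torch.ops.mylib.my_op(torch.randn(3))  # hypothetical op name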


For more details
^^^^^^^^^^^^^^^^

Please see `The Custom Operators Manual (gdoc) <https://docs.google.com/document/d/1_W62p8WJOQQUzPsJYa7s701JXt0qf2OfLub2sbkHOaU>`_
(we're working on moving the information to our docs site). We recommend that you
first read one of the tutorials above and then use the Custom Operators Manual as a reference;
it is not meant to be read cover to cover.

When should I create a Custom Operator?
---------------------------------------
If your operation is expressible as a composition of built-in PyTorch operators
then please write it as a Python function and call it instead of creating a
custom operator. Use the operator registration APIs to create a custom op if you
are calling into some library that PyTorch doesn't understand (e.g. custom C/C++ code,
a custom CUDA kernel, or Python bindings to C/C++/CUDA extensions).
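For example, a fused "gelu plus residual" needs no registration at all; a
plain function like the sketch below already works with autograd, vmap, and
``torch.compile``::

    import torch

    # Composition of built-in operators: no custom op machinery needed.
    def fused_gelu_residual(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.gelu(x) + y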

Why should I create a Custom Operator?
--------------------------------------

It is possible to use a C/C++/CUDA kernel by grabbing a Tensor's data pointer
and passing it to a pybind'ed kernel. However, this approach doesn't compose with
PyTorch subsystems like autograd, torch.compile, vmap, and more. In order
for an operation to compose with PyTorch subsystems, it must be registered
via the operator registration APIs.
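To make the composition point concrete, here is a hedged sketch of adding
autograd support to a registered custom op with
:func:`torch.library.register_autograd` (PyTorch 2.4 API); ``mylib::my_scale``
stands in for an opaque kernel::

    import torch

    @torch.library.custom_op("mylib::my_scale", mutates_args=())
    def my_scale(x: torch.Tensor, alpha: float) -> torch.Tensor:
        return x * alpha  # stand-in for an opaque C/C++/CUDA kernel

    def setup_context(ctx, inputs, output):
        _, alpha = inputs
        ctx.alpha = alpha

    def backward(ctx, grad):
        # One gradient per input; None for the non-Tensor alpha.
        return grad * ctx.alpha, None

    torch.library.register_autograd(
        "mylib::my_scale", backward, setup_context=setup_context
    )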
22 changes: 12 additions & 10 deletions docs/source/notes/extending.rst
@@ -4,6 +4,18 @@ Extending PyTorch
In this note we'll cover ways of extending :mod:`torch.nn`,
:mod:`torch.autograd`, :mod:`torch`, and writing custom C++ extensions.

Adding new operators
--------------------

PyTorch offers a large library of operators that work on Tensors (e.g. :func:`torch.add`,
:func:`torch.sum`, etc). However, you may wish to bring a new custom operation to PyTorch
and have it behave like PyTorch's built-in operators. In order to do so, you must
register the custom operation with PyTorch via the Python :ref:`torch-library-docs` or C++ TORCH_LIBRARY
APIs.


Please see :ref:`custom-ops-landing-page` for more details.

.. _extending-autograd:

Extending :mod:`torch.autograd`
@@ -968,13 +980,3 @@ Which prints the following, with extra comments::
Dispatch Log: aten.mul.Tensor(*(tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]), 2), **{})
Dispatch Log: aten.detach.default(*(tensor([2., 2., 2., 2., 2., 2., 2., 2., 2., 2.]),), **{})
Dispatch Log: aten.detach.default(*(tensor([2., 2., 2., 2., 2., 2., 2., 2., 2., 2.]),), **{})


-Writing custom C++ extensions
------------------------------
-
-See this
-`PyTorch tutorial <https://pytorch.org/tutorials/advanced/cpp_extension.html>`_
-for a detailed explanation and examples.
-
-Documentations are available at :doc:`../cpp_extension`.
