Rebuild Docs V0.8.0 (#8392)
* rebuild for 5 modules

* fix bug

* fix for doctree and content in nn

* fix

* fix

* fix

* add some

* fix for oneflow.rst

* update oneflow oneflow.nn

* update tensor

* update tensor module

* update

* test

* update

* update

* fix for undone desc

* docs: oneflow.utils.data (#8485)

* feat(utils.data): add oneflow.utils.data

* docs(dataloader): change the docstring of DataLoader

* docs(tensor): add methods to oneflow.Tensor document

* docs(optim): change docstring of optimizer and add a note to the document

* nn.graph

* fix for graph

* fix bug

* review nn and linalg document (#8515)

* docs(nn): add contents to oneflow.nn document

* docs(linalg): refactor oneflow.linalg document

* change attributes.rst and review nn.functional.rst (#8514)

* change attributes.rst and review nn.functional.rst

* reconstruct oneflow.cuda

* fix cuda and rebuild comm demo (#8582)

* update image

* add distributed

* oneembedding & refine graph

* update for distributed one_embedding

* fix rnn.py (#8616)

* Refactor the oneflow.nn.init document (#8622)

docs(nn.init): refactor nn.init document

* docs(nn.init): remove the comments

* docs(utils.data): remove the comments

* update and fix bug

* docs(review): refine the documents (#8646)

* docs(review): refine oneflow, nn, Tensor, nn.init, linalg, utils.data, optim modules

* docs(optim): modify the code examples

* docs(tensor): edit note

* Refactor the oneflow.autograd document (#8594)

* docs(autograd): refactor oneflow.autograd

* docs(autograd): edit "Default gradient layouts".

* docs(autograd): reedit "Default gradient layouts"

* docs(autograd): add comment

* docs(autograd): add reference

* update

* docs(tensor): change autoclass to autosummary

* update

* update

* add oneflow.linalg.diagonal (#8653)

* docs(linalg): add oneflow.linalg.diagonal

* update environment variable

* Update docs/source/distributed.rst

Co-authored-by: Houjiang Chen <chenhoujiangcug@gmail.com>

* Update docs/source/distributed.rst

Co-authored-by: Houjiang Chen <chenhoujiangcug@gmail.com>

* update environment variable

* update for ev & distributed

* update distributed

* update ev

* update distributed desc

* Update docs/source/distributed.rst

Co-authored-by: Houjiang Chen <chenhoujiangcug@gmail.com>

* update

* Modify docstring descriptions (#8656)

* docs: move pytorch reference to end

* docs: add some docstring

* docs(refs): add refs

* Update docs/source/distributed.rst

* update for distributed details and environment_variable

* docs(docstring): Modify all reference links to version 1.10 (#8663)

* fix bug

* fix bug

* fix all warnings

Co-authored-by: Guoliang Cheng <1876953310@qq.com>
Co-authored-by: liu xuan <85344642+laoliu97@users.noreply.github.com>
Co-authored-by: Guoliang Cheng <lmyybh_lazy@163.com>
Co-authored-by: laoliu97 <841637247@qq.com>
Co-authored-by: Yao Chi <later@usopp.net>
Co-authored-by: Houjiang Chen <chenhoujiangcug@gmail.com>
7 people committed Jul 19, 2022
1 parent 2733168 commit e15a8bc
Showing 72 changed files with 3,859 additions and 1,128 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -2,6 +2,7 @@
/build-*
/docs/build/
/docs/build-cn/
/docs/source/generated
/cmake-build-*
/dist
/third_party/
93 changes: 84 additions & 9 deletions docs/source/autograd.rst
@@ -1,12 +1,87 @@
oneflow.autograd
================================================
Functions and classes for autograd.
---------------------------------------------------
====================================================

.. The documentation is referenced from:
https://pytorch.org/docs/1.10/autograd.html
``oneflow.autograd`` provides classes and functions implementing automatic differentiation of arbitrary scalar
valued functions. It requires minimal changes to the existing code - you only need to declare ``Tensor`` s
for which gradients should be computed with the ``requires_grad=True`` keyword. As of now, we only support
autograd for floating point ``Tensor`` types (half, float, double and bfloat16).
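
As a quick illustration of the paragraph above (a minimal sketch, assuming the standard eager-mode tensor API)::

    import oneflow as flow

    # Only floating-point tensors created with requires_grad=True
    # participate in autograd.
    x = flow.ones(2, 3, requires_grad=True)
    y = (x * 2).sum()

    # Backpropagating from the scalar output populates x.grad.
    y.backward()
    print(x.grad)  # 2x3 tensor filled with 2.0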


.. currentmodule:: oneflow.autograd
.. autoclass:: oneflow.autograd.Function
:members: apply,
:special-members: __call__,

.. automodule:: oneflow.autograd
:members: grad,
backward,
.. autosummary::
:toctree: generated
:nosignatures:

backward
grad

Locally disabling gradient computation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated
:nosignatures:

no_grad
enable_grad
set_grad_enabled
inference_mode

.. TODO(wyg): uncomment this after aligning accumulate grad
.. Default gradient layouts
.. ^^^^^^^^^^^^^^^^^^^^^^^^
.. A ``param.grad`` is accumulated by replacing ``.grad`` with a
.. new tensor ``.grad + new grad`` during :func:`oneflow.autograd.backward()` or
.. :func:`oneflow.Tensor.backward()`.
In-place operations on Tensors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Supporting in-place operations in autograd is a hard matter, and we discourage
their use in most cases. Autograd's aggressive buffer freeing and reuse makes
it very efficient and there are very few occasions when in-place operations
actually lower memory usage by any significant amount. Unless you're operating
under heavy memory pressure, you might never need to use them.

Tensor autograd functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:nosignatures:

oneflow.Tensor.grad
oneflow.Tensor.requires_grad
oneflow.Tensor.is_leaf
oneflow.Tensor.backward
oneflow.Tensor.detach
oneflow.Tensor.register_hook
oneflow.Tensor.retain_grad

Function
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: Function
.. currentmodule:: oneflow.autograd
.. autosummary::
:toctree: generated
:nosignatures:

Function.forward
Function.backward
Function.apply

Context method mixins
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When creating a new :class:`Function`, the following methods are available to `ctx`.

.. currentmodule:: oneflow.autograd.autograd_function
.. autosummary::
:toctree: generated
:nosignatures:

FunctionAutoGradCaptureState.mark_non_differentiable
FunctionAutoGradCaptureState.save_for_backward
FunctionAutoGradCaptureState.saved_tensors
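
Putting :class:`Function` and the ``ctx`` methods together, a custom op could be sketched like this (illustrative only; it assumes the staticmethod ``forward``/``backward`` protocol listed above)::

    import oneflow as flow

    class Square(flow.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            # Stash the input so backward can reuse it.
            ctx.save_for_backward(x)
            return x * x

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # d(x*x)/dx = 2x, scaled by the incoming gradient.
            return 2 * x * grad_output

    x = flow.randn(3, requires_grad=True)
    Square.apply(x).sum().backward()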
17 changes: 0 additions & 17 deletions docs/source/comm.rst

This file was deleted.

10 changes: 10 additions & 0 deletions docs/source/conf.py
@@ -44,10 +44,16 @@
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.napoleon",
"sphinx.ext.intersphinx",
"recommonmark",
"sphinx.ext.autosummary",
"sphinx_copybutton",
]

# build the templated autosummary files
autosummary_generate = True
numpydoc_show_class_members = False

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

@@ -107,6 +113,10 @@
#
# html_sidebars = {}

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
"python": ("https://docs.python.org/3", None),
}

# -- Options for HTMLHelp output ---------------------------------------------

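With ``autosummary_generate = True``, Sphinx writes one stub page per ``autosummary`` entry under the ``:toctree:`` target (here ``docs/source/generated``), which is why that directory is newly ignored in ``.gitignore`` above; ``intersphinx_mapping`` lets cross-references such as :class:`python:list` resolve against the Python 3 documentation.
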
61 changes: 42 additions & 19 deletions docs/source/cuda.rst
@@ -1,22 +1,45 @@
oneflow.cuda
===================================
ONEFLOW.CUDA
----------------------------------

.. The documentation is referenced from: https://pytorch.org/docs/1.10/cuda.html.
.. currentmodule:: oneflow.cuda
.. automodule:: oneflow.cuda
:members: is_available,
device_count,
current_device,
set_device,
synchronize,
manual_seed_all,
manual_seed,
empty_cache,
HalfTensor,
FloatTensor,
DoubleTensor,
BoolTensor,
ByteTensor,
CharTensor,
IntTensor,
LongTensor,

.. autosummary::
:toctree: generated
:nosignatures:

is_available
device_count
current_device
set_device
synchronize

.. note::
    The :attr:`current_device` returns the local rank as the device index. This differs from ``torch.cuda.current_device()`` in PyTorch.


Random Number Generator
-------------------------
.. autosummary::
:toctree: generated
:nosignatures:

manual_seed_all
manual_seed


GPU tensor
-----------------------------
.. autosummary::
:toctree: generated
:nosignatures:

HalfTensor
FloatTensor
DoubleTensor
BoolTensor
ByteTensor
CharTensor
IntTensor
LongTensor
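
Taken together, typical usage of this module might look like the following (a minimal sketch; the local-rank behavior follows the note above)::

    import oneflow as flow

    if flow.cuda.is_available():
        # current_device() reports the process's local rank as its
        # device index, so each rank drives its own GPU.
        idx = flow.cuda.current_device()
        flow.cuda.set_device(idx)
        flow.cuda.manual_seed_all(42)
        print(flow.cuda.device_count(), idx)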
