update dygraph auto_parallel en API docs. #59557

Merged
merged 1 commit into PaddlePaddle:develop on Dec 4, 2023

Conversation

wuhuachaocoding (Contributor):

PR types

Others

PR changes

Docs

Description

update dygraph auto_parallel en API docs.
Pcard-73145
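
For context, here is a minimal sketch of the dygraph auto_parallel placement API whose English docstrings this PR updates, pieced together from the examples quoted in the review threads below. The two-rank mesh, tensor shape, and variable names are illustrative assumptions, the comments summarize the usual meaning of each placement rather than quoting the updated docs, and the shard_tensor calls need a multi-process distributed launch to actually run.

import paddle
import paddle.distributed as dist

# A 1-D process mesh over two ranks; "x" names the mesh dimension.
mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
a = paddle.ones([10, 20])

# The three placement kinds covered by this docs update:
#   Replicate()  - every rank in the mesh dimension holds a full copy
#   Shard(0)     - the tensor is split along its dim 0 across the mesh dimension
#   Partial(...) - each rank holds a partial value to be combined with the given
#                  ReduceType (kRedSum, kRedMax, kRedMin, kRedProd, kRedAvg,
#                  kRedAny, kRedAll)
d_replicated = dist.shard_tensor(a, mesh, [dist.Replicate()])
d_sharded = dist.shard_tensor(a, mesh, [dist.Shard(0)])
d_partial = dist.shard_tensor(a, mesh, [dist.Partial(dist.ReduceType.kRedSum)])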

Comment on lines 371 to 377
ReduceType.kRedSum
ReduceType.kRedMax
ReduceType.kRedMin
ReduceType.kRedProd
ReduceType.kRedAvg
ReduceType.kRedAny
ReduceType.kRedAll
Contributor:

Please change these into one-item-per-line list entries so it looks nicer.
(screenshot of the rendered page attached)

Suggested change:

Before:
ReduceType.kRedSum
ReduceType.kRedMax
ReduceType.kRedMin
ReduceType.kRedProd
ReduceType.kRedAvg
ReduceType.kRedAny
ReduceType.kRedAll

After:
- ReduceType.kRedSum
- ReduceType.kRedMax
- ReduceType.kRedMin
- ReduceType.kRedProd
- ReduceType.kRedAvg
- ReduceType.kRedAny
- ReduceType.kRedAll

Contributor (Author):

DONE

Comment on lines 379 to 388
Examples:
.. code-block:: python

>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial(dist.ReduceType.kRedSum)])
Contributor:

Every layer of the "Examples:" / ".. code-block:: python" snippet has to keep its indentation (see the English API documentation template), otherwise the web page will not render correctly.

Suggested change:

Before:
Examples:
.. code-block:: python
>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial(dist.ReduceType.kRedSum)])

After:
Examples:
    .. code-block:: python

        >>> import paddle
        >>> import paddle.distributed as dist
        >>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
        >>> a = paddle.ones([10, 20])
        >>> # doctest: +REQUIRES(env:DISTRIBUTED)
        >>> # distributed tensor
        >>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial(dist.ReduceType.kRedSum)])

Contributor (Author):

DONE
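
For reference, the layered indentation the reviewer is pointing to looks roughly like this inside a full docstring (a minimal sketch with a hypothetical function name, not taken from the PR; only the nesting matters): the ".. code-block:: python" directive sits one indentation level under "Examples:", is followed by a blank line, and the doctest lines are indented one level further.

def example_api(x):
    """
    Hypothetical API, shown only to illustrate the docstring layout.

    Examples:
        .. code-block:: python

            >>> import paddle
            >>> import paddle.distributed as dist
            >>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
    """
    return x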

Comment on lines 405 to 417
Examples:
.. code-block:: python

>>> import paddle.distributed as dist
>>> placements = [dist.Replicate(), dist.Shard(0), dist.Partial()]
>>> for p in placements:
>>>     if isinstance(p, dist.Placement):
>>>         if p.is_replicated():
>>>             print("replicate.")
>>>         elif p.is_shard():
>>>             print("shard.")
>>>         elif p.is_partial():
>>>             print("partial.")
Contributor:

Same as above, this needs indentation.

Suggested change:

Before:
Examples:
.. code-block:: python
>>> import paddle.distributed as dist
>>> placements = [dist.Replicate(), dist.Shard(0), dist.Partial()]
>>> for p in placements:
>>>     if isinstance(p, dist.Placement):
>>>         if p.is_replicated():
>>>             print("replicate.")
>>>         elif p.is_shard():
>>>             print("shard.")
>>>         elif p.is_partial():
>>>             print("partial.")

After:
Examples:
    .. code-block:: python

        >>> import paddle.distributed as dist
        >>> placements = [dist.Replicate(), dist.Shard(0), dist.Partial()]
        >>> for p in placements:
        >>>     if isinstance(p, dist.Placement):
        >>>         if p.is_replicated():
        >>>             print("replicate.")
        >>>         elif p.is_shard():
        >>>             print("shard.")
        >>>         elif p.is_partial():
        >>>             print("partial.")

Contributor (Author):

DONE

Comment on lines 439 to 448
Examples:
.. code-block:: python

>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([[2, 4, 5], [0, 1, 3]], dim_names=['x', 'y'])
>>> a = paddle.to_tensor([[1,2,3],[5,6,7]])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Shard(0), dist.Shard(1)])
Contributor:

Suggested change:

Before:
Examples:
.. code-block:: python
>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([[2, 4, 5], [0, 1, 3]], dim_names=['x', 'y'])
>>> a = paddle.to_tensor([[1,2,3],[5,6,7]])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Shard(0), dist.Shard(1)])

After:
Examples:
    .. code-block:: python

        >>> import paddle
        >>> import paddle.distributed as dist
        >>> mesh = dist.ProcessMesh([[2, 4, 5], [0, 1, 3]], dim_names=['x', 'y'])
        >>> a = paddle.to_tensor([[1,2,3],[5,6,7]])
        >>> # doctest: +REQUIRES(env:DISTRIBUTED)
        >>> # distributed tensor
        >>> d_tensor = dist.shard_tensor(a, mesh, [dist.Shard(0), dist.Shard(1)])

Contributor (Author):

DONE

Comment on lines 465 to 474
Examples:
.. code-block:: python

>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Replicate()])
Contributor:

Suggested change:

Before:
Examples:
.. code-block:: python
>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Replicate()])

After:
Examples:
    .. code-block:: python

        >>> import paddle
        >>> import paddle.distributed as dist
        >>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
        >>> a = paddle.ones([10, 20])
        >>> # doctest: +REQUIRES(env:DISTRIBUTED)
        >>> # distributed tensor
        >>> d_tensor = dist.shard_tensor(a, mesh, [dist.Replicate()])

Contributor (Author):

DONE

Comment on lines 491 to 500
Examples:
.. code-block:: python

>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial()])
Contributor:

Suggested change:

Before:
Examples:
.. code-block:: python
>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial()])

After:
Examples:
    .. code-block:: python

        >>> import paddle
        >>> import paddle.distributed as dist
        >>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
        >>> a = paddle.ones([10, 20])
        >>> # doctest: +REQUIRES(env:DISTRIBUTED)
        >>> # distributed tensor
        >>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial()])

Contributor (Author):

DONE

@sunzhongkai588 (Contributor) left a comment:

LGTM. Please also provide the corresponding Chinese documentation.

@LiYuRio (Contributor) left a comment:

LGTM

@XiaoguangHu01 (Contributor) left a comment:

LGTM

@LiYuRio merged commit 6577edb into PaddlePaddle:develop on Dec 4, 2023
29 checks passed
SigureMo pushed a commit to gouzil/Paddle that referenced this pull request Dec 5, 2023