Updated the English API documentation #48219

Merged 33 commits on Dec 7, 2022
Commits (33)
689898b
Updated the examples for paddle.nn.dynamic_decode and paddle.nn.functional.diag_embed
Atlantisming Nov 20, 2022
ef533c5
Merge branch 'PaddlePaddle:develop' into develop
Atlantisming Nov 21, 2022
1fee067
Merge branch 'PaddlePaddle:develop' into develop
Atlantisming Nov 21, 2022
e9dff97
Merge branch 'PaddlePaddle:develop' into develop
Atlantisming Nov 21, 2022
20071df
mma qk tensor_core (#48087)
carryyu Nov 21, 2022
2a0d07d
remove lrn which is not used in paddle 2.0 (#47945)
Vvsmile Nov 21, 2022
b8c14a7
replace scatter_nd and scatter_nd_add with paddle.scatter_nd and (#47…
Vvsmile Nov 21, 2022
6450a74
[PHI] Migrate mul_grad kernel (#48061)
Silv3S Nov 21, 2022
1698673
delete unnecessary shape and slice op (#48112)
RichardWooSJTU Nov 21, 2022
1c35c3a
Merge remote-tracking branch 'origin/develop' into my-branch
Atlantisming Nov 21, 2022
ccc5a5a
Updated the English documentation.
Atlantisming Nov 21, 2022
fc04ce6
Merge branch 'PaddlePaddle:develop' into develop
Atlantisming Nov 21, 2022
9336899
Merge branch 'develop' into my-branch
Atlantisming Nov 22, 2022
8f71143
Updated the English documentation for the segment operators and others.
Atlantisming Nov 22, 2022
8f6b836
Revised paddle.einsum, paddle.unique_consecutive,
Atlantisming Nov 23, 2022
9619c78
Revised the English documentation formatting; test=docs_preview
Atlantisming Nov 25, 2022
597cf5d
Update extension.py
Ligoml Nov 29, 2022
d1afde5
Revised the English documentation formatting; test=docs_preview
Atlantisming Dec 1, 2022
74815e4
Revised the English documentation formatting.
Atlantisming Dec 2, 2022
235adca
Merge branch 'PaddlePaddle:develop' into my-branch
Atlantisming Dec 3, 2022
8068466
Revised the English documentation formatting.
Atlantisming Dec 3, 2022
b1d60fe
Revised the English documentation formatting.
Atlantisming Dec 3, 2022
d7ef701
update
SigureMo Dec 3, 2022
cbe7a05
test=docs_preview
SigureMo Dec 3, 2022
3426ec6
update formula; test=docs_preview
SigureMo Dec 3, 2022
fdd30e9
update formula; test=docs_preview
SigureMo Dec 3, 2022
a62cd0f
remove this operator; test=docs_preview
SigureMo Dec 3, 2022
017a45e
add hyper link; test=docs_preview
SigureMo Dec 3, 2022
c5a40e4
add default value; test=docs_preview
SigureMo Dec 3, 2022
9341553
update format; test=docs_preview
SigureMo Dec 3, 2022
d734777
empty commit; test=docs_preview
SigureMo Dec 3, 2022
fba0e71
fix codestyle issues; test=docs_preview
SigureMo Dec 3, 2022
1d5b5b2
empty commit; test=docs_preview
SigureMo Dec 3, 2022
4 changes: 2 additions & 2 deletions python/paddle/device/cuda/__init__.py
@@ -355,8 +355,8 @@ def _set_current_stream(stream):
@signature_safe_contextmanager
def stream_guard(stream):
'''
- **Notes**:
- **This API only supports dygraph mode currently.**
+ Notes:
+     This API only supports dynamic graph mode currently.

A context manager that specifies the current stream context by the given stream.
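The revised note restricts `stream_guard` to dynamic graph mode; a minimal usage sketch, assuming a CUDA build of Paddle with at least one visible GPU (tensor values are illustrative):

```python
import paddle

s = paddle.device.cuda.Stream()
data = paddle.ones([20])
with paddle.device.cuda.stream_guard(s):
    out = data * 2  # this kernel is launched on stream `s` under dynamic graph mode
```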
3 changes: 2 additions & 1 deletion python/paddle/fluid/framework.py
@@ -786,7 +786,8 @@ def disable_signal_handler():

Make sure you called paddle.disable_signal_handler() before using the above-mentioned frameworks.

- Returns: None
+ Returns:
+     None

Examples:
.. code-block:: python
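For context, the API takes no arguments; a minimal sketch (the frameworks named in the comment are typical examples of signal-handler-installing libraries, not a list taken from this docstring):

```python
import paddle

# Call once, before initializing frameworks (e.g. TensorFlow, MXNet) that
# install their own signal handlers, as the docstring above advises.
paddle.disable_signal_handler()
```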
25 changes: 11 additions & 14 deletions python/paddle/fluid/layers/rnn.py
@@ -1805,26 +1805,23 @@ def dynamic_decode(
**kwargs: Additional keyword arguments. Arguments passed to `decoder.step`.

Returns:
- tuple: A tuple( :code:`(final_outputs, final_states, sequence_lengths)` ) \
- when `return_length` is True, otherwise a tuple( :code:`(final_outputs, final_states)` ). \
- The final outputs and states, both are Tensor or nested structure of Tensor. \
- `final_outputs` has the same structure and data types as the :code:`outputs` \
- returned by :code:`decoder.step()` , and each Tenser in `final_outputs` \
- is the stacked of all decoding steps' outputs, which might be revised \
- by :code:`decoder.finalize()` if the decoder has implemented `finalize`. \
- `final_states` is the counterpart at last time step of initial states \
- returned by :code:`decoder.initialize()` , thus has the same structure \
- with it and has tensors with same shapes and data types. `sequence_lengths` \
- is an `int64` tensor with the same shape as `finished` returned \
- by :code:`decoder.initialize()` , and it stores the actual lengths of \
- all decoded sequences.
+ - final_outputs (Tensor, nested structure of Tensor), each Tensor in :code:`final_outputs` is the stack of all decoding steps' outputs, which might be revised
+   by :code:`decoder.finalize()` if the decoder has implemented finalize.
+   :code:`final_outputs` has the same structure and data types as the :code:`outputs`
+   returned by :code:`decoder.step()`.
+ - final_states (Tensor, nested structure of Tensor), :code:`final_states` is the counterpart at the last time step of the initial states
+   returned by :code:`decoder.initialize()`, thus has the same structure
+   with it and has tensors with the same shapes and data types.
+ - sequence_lengths (Tensor), stores the actual lengths of all decoded sequences.
+   sequence_lengths is provided only if :code:`return_length` is True.

Examples:

.. code-block:: python

import numpy as np
import paddle
from paddle.nn import BeamSearchDecoder, dynamic_decode
from paddle.nn import GRUCell, Linear, Embedding
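The diff view truncates the docstring's example; a self-contained sketch of the documented call, continuing the imports shown above (layer sizes and `max_step_num` are illustrative):

```python
import paddle
from paddle.nn import BeamSearchDecoder, dynamic_decode
from paddle.nn import GRUCell, Linear, Embedding

# Build a small beam-search decoder over a GRU cell.
trg_embeder = Embedding(100, 32)
output_layer = Linear(32, 32)
decoder_cell = GRUCell(input_size=32, hidden_size=32)
decoder = BeamSearchDecoder(decoder_cell,
                            start_token=0,
                            end_token=1,
                            beam_size=4,
                            embedding_fn=trg_embeder,
                            output_fn=output_layer)
encoder_output = paddle.ones((4, 8, 32), dtype=paddle.get_default_dtype())
# return_length defaults to False, so a (final_outputs, final_states) pair
# is returned, matching the Returns section above.
final_outputs, final_states = dynamic_decode(
    decoder=decoder,
    inits=decoder_cell.get_initial_states(encoder_output),
    max_step_num=10)
```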
3 changes: 3 additions & 0 deletions python/paddle/framework/framework.py
@@ -93,6 +93,9 @@ def set_grad_enabled(mode):
Args:
mode(bool): whether to enable (`True`), or disable (`False`) grad.

+ Returns:
+     None.
+
Examples:
.. code-block:: python

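`set_grad_enabled` also works as a context manager, which a short sketch makes concrete (tensor values are illustrative):

```python
import paddle

x = paddle.to_tensor([1.0], stop_gradient=False)
with paddle.set_grad_enabled(False):  # disable autograd inside the block
    y = x * 2
print(y.stop_gradient)  # True: y was produced with grad tracking disabled
```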
44 changes: 32 additions & 12 deletions python/paddle/incubate/tensor/math.py
@@ -31,9 +31,14 @@ def segment_sum(data, segment_ids, name=None):
r"""
Segment Sum Operator.

- This operator sums the elements of input `data` which with
+ Sum the elements of input `data` which have
the same index in `segment_ids`.
- It computes a tensor such that $out_i = \\sum_{j} data_{j}$
+ It computes a tensor such that
+
+ .. math::
+
+     out_i = \sum_{j \in \{segment\_ids_j == i \}} data_{j}
+
where the sum is over all j such that `segment_ids[j] == i`.

Args:
@@ -45,7 +50,7 @@ def segment_sum(data, segment_ids, name=None):
For more information, please refer to :ref:`api_guide_Name`.

Returns:
- output (Tensor): the reduced result.
+ Tensor, the Segment Sum result.

Examples:

@@ -93,11 +98,16 @@ def segment_sum(data, segment_ids, name=None):
)
def segment_mean(data, segment_ids, name=None):
r"""
- Segment mean Operator.
+ Segment Mean Operator.

This operator calculates the mean value of the elements of input `data` that have
the same index in `segment_ids`.
- It computes a tensor such that $out_i = \\frac{1}{n_i} \\sum_{j} data[j]$
+ It computes a tensor such that
+
+ .. math::
+
+     out_i = \mathop{mean}_{j \in \{segment\_ids_j == i \}} data_{j}
+
where the mean is over all j such that `segment_ids[j] == i`, and $n_i$ is the number
of indices with `segment_ids[j] == i`.

@@ -110,7 +120,7 @@ def segment_mean(data, segment_ids, name=None):
For more information, please refer to :ref:`api_guide_Name`.

Returns:
- output (Tensor): the reduced result.
+ Tensor, the Segment Mean result.

Examples:

@@ -161,9 +171,14 @@ def segment_min(data, segment_ids, name=None):
r"""
Segment min operator.

- This operator calculate the minimum elements of input `data` which with
+ Calculate the minimum elements of input `data` which have
the same index in `segment_ids`.
- It computes a tensor such that $out_i = \\min_{j} data_{j}$
+ It computes a tensor such that
+
+ .. math::
+
+     out_i = \min_{j \in \{segment\_ids_j == i \}} data_{j}
+
where the min is over all j such that `segment_ids[j] == i`.

Args:
@@ -175,7 +190,7 @@ def segment_min(data, segment_ids, name=None):
For more information, please refer to :ref:`api_guide_Name`.

Returns:
- output (Tensor): the reduced result.
+ Tensor, the minimum result.

Examples:

@@ -227,9 +242,14 @@ def segment_max(data, segment_ids, name=None):
r"""
Segment max operator.

- This operator calculate the maximum elements of input `data` which with
+ Calculate the maximum elements of input `data` which have
the same index in `segment_ids`.
- It computes a tensor such that $out_i = \\max_{j} data_{j}$
+ It computes a tensor such that
+
+ .. math::
+
+     out_i = \max_{j \in \{segment\_ids_j == i \}} data_{j}
+
where the max is over all j such that `segment_ids[j] == i`.

Args:
@@ -241,7 +261,7 @@ def segment_max(data, segment_ids, name=None):
For more information, please refer to :ref:`api_guide_Name`.

Returns:
- output (Tensor): the reduced result.
+ Tensor, the maximum result.

Examples:

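Since all four segment reductions share the same formula shape, one hedged sketch can exercise them together (values are illustrative; the `paddle.incubate.segment_*` entry points are assumed to re-export the functions edited in this file):

```python
import paddle

data = paddle.to_tensor(
    [[1., 2., 3.], [3., 2., 1.], [4., 5., 6.]], dtype='float32')
segment_ids = paddle.to_tensor([0, 0, 1], dtype='int32')  # rows 0-1 form segment 0

print(paddle.incubate.segment_sum(data, segment_ids))   # [[4., 4., 4.], [4., 5., 6.]]
print(paddle.incubate.segment_mean(data, segment_ids))  # [[2., 2., 2.], [4., 5., 6.]]
print(paddle.incubate.segment_min(data, segment_ids))   # [[1., 2., 1.], [4., 5., 6.]]
print(paddle.incubate.segment_max(data, segment_ids))   # [[3., 2., 3.], [4., 5., 6.]]
```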
84 changes: 42 additions & 42 deletions python/paddle/nn/functional/extension.py
@@ -39,7 +39,7 @@

def diag_embed(input, offset=0, dim1=-2, dim2=-1):
"""
- This OP creates a tensor whose diagonals of certain 2D planes (specified by dim1 and dim2)
+ Creates a tensor whose diagonals of certain 2D planes (specified by dim1 and dim2)
are filled by ``input``. By default, a 2D plane formed by the last two dimensions
of the returned tensor will be selected.

@@ -61,48 +61,48 @@ def diag_embed(input, offset=0, dim1=-2, dim2=-1):
Examples:
.. code-block:: python

import paddle
import paddle.nn.functional as F
- import numpy as np
-
- diag_embed = np.random.randn(2, 3).astype('float32')
- # [[ 0.7545889 , -0.25074545, 0.5929117 ],
- #  [-0.6097662 , -0.01753256, 0.619769 ]]
-
- data1 = F.diag_embed(diag_embed)
- data1.numpy()
- # [[[ 0.7545889 , 0. , 0. ],
- #   [ 0. , -0.25074545, 0. ],
- #   [ 0. , 0. , 0.5929117 ]],
-
- #  [[-0.6097662 , 0. , 0. ],
- #   [ 0. , -0.01753256, 0. ],
- #   [ 0. , 0. , 0.619769 ]]]
-
- data2 = F.diag_embed(diag_embed, offset=-1, dim1=0, dim2=2)
- data2.numpy()
- # [[[ 0. , 0. , 0. , 0. ],
- #   [ 0.7545889 , 0. , 0. , 0. ],
- #   [ 0. , -0.25074545, 0. , 0. ],
- #   [ 0. , 0. , 0.5929117 , 0. ]],
- #
- #  [[ 0. , 0. , 0. , 0. ],
- #   [-0.6097662 , 0. , 0. , 0. ],
- #   [ 0. , -0.01753256, 0. , 0. ],
- #   [ 0. , 0. , 0.619769 , 0. ]]]
-
- data3 = F.diag_embed(diag_embed, offset=1, dim1=0, dim2=2)
- data3.numpy()
- # [[[ 0. , 0.7545889 , 0. , 0. ],
- #   [ 0. , -0.6097662 , 0. , 0. ]],
- #
- #  [[ 0. , 0. , -0.25074545, 0. ],
- #   [ 0. , 0. , -0.01753256, 0. ]],
- #
- #  [[ 0. , 0. , 0. , 0.5929117 ],
- #   [ 0. , 0. , 0. , 0.619769 ]],
- #
- #  [[ 0. , 0. , 0. , 0. ],
- #   [ 0. , 0. , 0. , 0. ]]]
+
+ diag_embed_input = paddle.arange(6)
+
+ diag_embed_output1 = F.diag_embed(diag_embed_input)
+ print(diag_embed_output1)
+ # Tensor(shape=[6, 6], dtype=int64, place=Place(cpu), stop_gradient=True,
+ #        [[0, 0, 0, 0, 0, 0],
+ #         [0, 1, 0, 0, 0, 0],
+ #         [0, 0, 2, 0, 0, 0],
+ #         [0, 0, 0, 3, 0, 0],
+ #         [0, 0, 0, 0, 4, 0],
+ #         [0, 0, 0, 0, 0, 5]])
+
+ diag_embed_output2 = F.diag_embed(diag_embed_input, offset=-1, dim1=0, dim2=1)
+ print(diag_embed_output2)
+ # Tensor(shape=[7, 7], dtype=int64, place=Place(cpu), stop_gradient=True,
+ #        [[0, 0, 0, 0, 0, 0, 0],
+ #         [0, 0, 0, 0, 0, 0, 0],
+ #         [0, 1, 0, 0, 0, 0, 0],
+ #         [0, 0, 2, 0, 0, 0, 0],
+ #         [0, 0, 0, 3, 0, 0, 0],
+ #         [0, 0, 0, 0, 4, 0, 0],
+ #         [0, 0, 0, 0, 0, 5, 0]])
+
+ diag_embed_input_2dim = paddle.reshape(diag_embed_input, [2, 3])
+ print(diag_embed_input_2dim)
+ # Tensor(shape=[2, 3], dtype=int64, place=Place(cpu), stop_gradient=True,
+ #        [[0, 1, 2],
+ #         [3, 4, 5]])
+ diag_embed_output3 = F.diag_embed(diag_embed_input_2dim, offset=0, dim1=0, dim2=2)
+ print(diag_embed_output3)
+ # Tensor(shape=[3, 2, 3], dtype=int64, place=Place(cpu), stop_gradient=True,
+ #        [[[0, 0, 0],
+ #          [3, 0, 0]],
+ #
+ #         [[0, 1, 0],
+ #          [0, 4, 0]],
+ #
+ #         [[0, 0, 2],
+ #          [0, 0, 5]]])
"""
if not isinstance(input, Variable):
input = assign(input)
31 changes: 16 additions & 15 deletions python/paddle/tensor/einsum.py
@@ -868,7 +868,7 @@ def einsum(equation, *operands):

einsum(equation, *operands)

- The current version of this API should be used in dygraph only mode.
+ The current version of this API should only be used in dynamic graph mode.

Einsum offers a tensor operation API which allows using the Einstein summation
convention or Einstein notation. It takes as input one or multiple tensors and
@@ -901,20 +901,21 @@ def einsum(equation, *operands):
dimensions into broadcasting dimensions.
- Singular labels are called free labels, duplicate are dummy labels. Dummy labeled
  dimensions will be reduced and removed in the output.
- - Output labels can be explicitly specified on the right hand side of `->` or omitted. In the latter case, the output labels will be inferred from the input labels.
-     - Inference of output labels
-         - Broadcasting label `...`, if present, is put on the leftmost position.
-         - Free labels are reordered alphabetically and put after `...`.
-     - On explicit output labels
-         - If broadcasting is enabled, then `...` must be present.
-         - The output labels can be an empty, an indication to output as a scalar
-           the sum over the original output.
-         - Non-input labels are invalid.
-         - Duplicate labels are invalid.
-         - For any dummy label which is present for the output, it's promoted to
-           a free label.
-         - For any free label which is not present for the output, it's lowered to
-           a dummy label.
+ - Output labels can be explicitly specified on the right hand side of `->` or omitted.
+   In the latter case, the output labels will be inferred from the input labels.
+     - Inference of output labels
+         - Broadcasting label `...`, if present, is put on the leftmost position.
+         - Free labels are reordered alphabetically and put after `...`.
+     - On explicit output labels
+         - If broadcasting is enabled, then `...` must be present.
+         - The output labels can be empty, an indication to output as a scalar
+           the sum over the original output.
+         - Non-input labels are invalid.
+         - Duplicate labels are invalid.
+         - For any dummy label which is present for the output, it's promoted to
+           a free label.
+         - For any free label which is not present for the output, it's lowered to
+           a dummy label.

- Examples
- '...ij, ...jk', where i and k are free labels, j is dummy. The output label
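A brief sketch of the free/dummy label rules described above (shapes are illustrative):

```python
import paddle

x = paddle.rand([2, 3])
y = paddle.rand([3, 4])
# 'ij,jk->ik': i and k are free labels; j appears twice, so it is a dummy
# label and is summed out -- an ordinary matrix multiplication.
z = paddle.einsum('ij,jk->ik', x, y)
print(z.shape)  # [2, 4]

# Empty output labels reduce everything to a scalar: the sum of x.
t = paddle.einsum('ij->', x)
```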
9 changes: 7 additions & 2 deletions python/paddle/tensor/linalg.py
@@ -2030,16 +2030,21 @@ def svd(x, full_matrices=False, name=None):
where `...` is zero or more batch dimensions. N and M can be arbitrary
positive numbers. Note that if x is a singular matrix, the gradient is numerically
unstable. The data type of x should be float32 or float64.
- full_matrices (bool): A flag to control the behavior of svd.
+ full_matrices (bool, optional): A flag to control the behavior of svd.
If full_matrices = True, svd op will compute full U and V matrices,
which means the shape of U is `[..., N, N]`, the shape of V is `[..., M, M]`. K = min(M, N).
If full_matrices = False, svd op will use an economic method to store U and V,
which means the shape of U is `[..., N, K]`, the shape of V is `[..., M, K]`. K = min(M, N).
Default value is False.
name (str, optional): Name for the operation (optional, default is None).
For more information, please refer to :ref:`api_guide_Name`.

Returns:
- Tuple of 3 tensors: (U, S, VH). VH is the conjugate transpose of V. S is the singular value vector of matrices with shape `[..., K]`
+ - U (Tensor), is the singular value decomposition result U.
+ - S (Tensor), is the singular value decomposition result S.
+ - VH (Tensor), VH is the conjugate transpose of V, which is the singular value decomposition result V.
+
+ Tuple of 3 tensors (U, S, VH): VH is the conjugate transpose of V. S is the singular value vector of matrices with shape `[..., K]`.

Examples:
.. code-block:: python
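A short sketch of the documented economic (`full_matrices=False`) behavior and the U/S/VH return order (the input matrix is illustrative):

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0], [1.0, 3.0], [4.0, 6.0]], dtype='float64')
u, s, vh = paddle.linalg.svd(x)  # full_matrices defaults to False
print(u.shape, s.shape, vh.shape)  # [3, 2] [2] [2, 2], with K = min(M, N) = 2
# Reconstruction check: u @ diag(s) @ vh should recover x.
print(paddle.dist(paddle.matmul(u * s, vh), x))  # ~0
```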
13 changes: 9 additions & 4 deletions python/paddle/tensor/manipulation.py
@@ -2278,12 +2278,12 @@ def unique_consecutive(
dtype="int64",
name=None,
):
- r"""
+ """
Eliminates all but the first element from every consecutive group of equivalent elements.

Note:
- This function is different from :func:`paddle.unique` in the sense that this function
- only eliminates consecutive duplicate values. This semantics is similar to `std::unique` in C++.
+ This function is different from :ref:`api_paddle_unique` in the sense that this function
+ only eliminates consecutive duplicate values. This semantics is similar to `std::unique` in C++.

Args:
x(Tensor): the input tensor, its data type should be float32, float64, int32, int64.
@@ -2299,7 +2299,12 @@ def unique_consecutive(
:ref:`api_guide_Name`. Default is None.

Returns:
- tuple (out, inverse, counts). `out` is the unique consecutive tensor for `x`. `inverse` is provided only if `return_inverse` is True. `counts` is provided only if `return_counts` is True.
+ - out (Tensor), the unique consecutive tensor for x.
+ - inverse (Tensor), maps each element of the input tensor to the index of the
+   corresponding element in the unique consecutive tensor for x.
+   inverse is provided only if return_inverse is True.
+ - counts (Tensor), the counts of every unique consecutive element in the input tensor.
+   counts is provided only if return_counts is True.

Example:
.. code-block:: python
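A sketch matching the documented three-tensor return (input values are illustrative):

```python
import paddle

x = paddle.to_tensor([1, 1, 2, 2, 3, 1, 1, 2])
out, inverse, counts = paddle.unique_consecutive(
    x, return_inverse=True, return_counts=True)
print(out)      # [1, 2, 3, 1, 2] -- only *consecutive* duplicates collapse
print(inverse)  # [0, 0, 1, 1, 2, 3, 3, 4]
print(counts)   # [2, 2, 1, 2, 1]
```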
10 changes: 7 additions & 3 deletions python/paddle/tensor/math.py
@@ -3450,9 +3450,13 @@ def cumprod(x, dim=None, dtype=None, name=None):

Args:
x (Tensor): the input tensor that needs to be cumproded.
- dim (int): the dimension along which the input tensor will be accumulated. It need to be in the range of [-x.rank, x.rank), where x.rank means the dimensions of the input tensor x and -1 means the last dimension.
- dtype (str, optional): The data type of the output tensor, can be float32, float64, int32, int64, complex64, complex128. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. The default value is None.
- name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.
+ dim (int, optional): the dimension along which the input tensor will be accumulated. It needs to be in the range of [-x.rank, x.rank),
+   where x.rank means the dimensions of the input tensor x and -1 means the last dimension.
+ dtype (str, optional): The data type of the output tensor, can be float32, float64, int32, int64, complex64,
+   complex128. If specified, the input tensor is cast to dtype before the operation is performed.
+   This is useful for preventing data type overflows. The default value is None.
+ name (str, optional): Name for the operation (optional, default is None). For more information,
+   please refer to :ref:`api_guide_Name`.

Returns:
Tensor, the result of cumprod operator.
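A quick sketch of `cumprod` along both dimensions (input values are illustrative):

```python
import paddle

data = paddle.arange(1, 7, dtype='float32').reshape([2, 3])
# data: [[1., 2., 3.],
#        [4., 5., 6.]]
print(paddle.cumprod(data, dim=1))  # running product along each row
# [[1.,  2.,   6.],
#  [4., 20., 120.]]
print(paddle.cumprod(data, dim=0))  # running product down each column
# [[1.,  2.,  3.],
#  [4., 10., 18.]]
```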