Hackathon No.26 #4301

Merged
merged 28 commits on Jun 10, 2022
Changes from 15 commits

Commits (28 commits)
1658baa
Create TripletMarginLoss_cn.rst
yangguohao Mar 13, 2022
4447200
Update Overview_cn.rst
yangguohao Mar 13, 2022
c70b155
Merge branch 'PaddlePaddle:develop' into triplet_margin_loss
yangguohao Apr 17, 2022
36f86b9
Update TripletMarginLoss_cn.rst
yangguohao Apr 17, 2022
56bbbbb
Merge branch 'PaddlePaddle:develop' into triplet_margin_loss
yangguohao Apr 27, 2022
f77ee23
Create triplet_margin_loss_cn.rst
yangguohao Apr 27, 2022
c9ae1a7
Update TripletMarginLoss_cn.rst
yangguohao Apr 27, 2022
0ae27b2
Update Overview_cn.rst
yangguohao Apr 27, 2022
73ef28d
Merge branch 'PaddlePaddle:develop' into triplet_margin_loss
yangguohao Apr 28, 2022
e142011
Update triplet_margin_loss_cn.rst
yangguohao May 5, 2022
03d5db6
Update triplet_margin_loss_cn.rst
yangguohao May 5, 2022
7951669
Update TripletMarginLoss_cn.rst
yangguohao May 5, 2022
0ffaad6
Update TripletMarginLoss_cn.rst
yangguohao May 6, 2022
7a6465d
Update triplet_margin_loss_cn.rst
yangguohao May 6, 2022
73fbccf
Update TripletMarginLoss_cn.rst
yangguohao May 7, 2022
8b38389
Update triplet_margin_loss_cn.rst
yangguohao May 9, 2022
a1ddf14
Update triplet_margin_loss_cn.rst
yangguohao May 9, 2022
df1ec4e
Update triplet_margin_loss_cn.rst
yangguohao May 9, 2022
62f96f0
Update Overview_cn.rst
yangguohao May 17, 2022
7b965be
Update TripletMarginLoss_cn.rst
yangguohao May 17, 2022
c69f80b
Update triplet_margin_loss_cn.rst
yangguohao May 17, 2022
110fb36
Update TripletMarginLoss_cn.rst
Ligoml May 18, 2022
1f2f74d
Update TripletMarginLoss_cn.rst
Ligoml May 18, 2022
ba2d550
Update triplet_margin_loss_cn.rst
Ligoml May 18, 2022
915461e
Update TripletMarginLoss_cn.rst
yangguohao May 19, 2022
df72325
Update triplet_margin_loss_cn.rst
yangguohao May 19, 2022
dc19e06
Update TripletMarginLoss_cn.rst
yangguohao May 23, 2022
0aabfdf
Update triplet_margin_loss_cn.rst
yangguohao May 23, 2022
3 changes: 2 additions & 1 deletion docs/api/paddle/nn/Overview_cn.rst
@@ -256,6 +256,7 @@ Loss layers
" :ref:`paddle.nn.MSELoss <cn_api_paddle_nn_MSELoss>` ", "Mean squared error loss layer"
" :ref:`paddle.nn.NLLLoss <cn_api_nn_loss_NLLLoss>` ", "NLLLoss layer"
" :ref:`paddle.nn.SmoothL1Loss <cn_api_paddle_nn_SmoothL1Loss>` ", "Smooth L1 loss layer"
" :ref:`paddle.nn.TripletMarginLoss <cn_api_paddle_nn_TripletMarginLoss>` ", "TripletMarginLoss layer"

.. _vision_layers:

@@ -475,7 +476,7 @@ Embedding-related functions
" :ref:`paddle.nn.functional.smooth_l1_loss <cn_paddle_nn_functional_loss_smooth_l1>` ", "Computes the smooth L1 loss"
" :ref:`paddle.nn.functional.softmax_with_cross_entropy <cn_api_fluid_layers_softmax_with_cross_entropy>` ", "Fuses the softmax operation with the computation of the cross-entropy loss"
" :ref:`paddle.nn.functional.margin_cross_entropy <cn_api_paddle_nn_functional_margin_cross_entropy>` ", "Combined margin loss function supporting ``Arcface``, ``Cosface`` and ``Sphereface``"

" :ref:`paddle.nn.functional.triplet_margin_loss <cn_paddle_nn_functional_triplet_margin_loss>` ", "Computes the TripletMarginLoss"
.. _common_functional:

Common methods
48 changes: 48 additions & 0 deletions docs/api/paddle/nn/TripletMarginLoss_cn.rst
@@ -0,0 +1,48 @@
.. _cn_api_paddle_nn_TripletMarginLoss:

TripletMarginLoss
-------------------------------

.. py:class:: paddle.nn.TripletMarginLoss(margin: float = 1.0, p: float = 2.0, eps: float = 1e-6, swap: bool = False, reduction: str = 'mean')

Creates a callable TripletMarginLoss object. It computes the `triplet margin loss` between `input`, `positive` and `negative`, which measures the relative similarity of `input` to the `positive examples` and the `negative examples`.

The loss is computed according to the following formula:

.. math::
L(input, pos, neg) = \max \{d(input_i, pos_i) - d(input_i, neg_i) + {\rm margin}, 0\}


where

.. math::
d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p


``p`` is the norm degree of the distance function. ``margin`` is the margin between the (input, positive) distance and the (input, negative) distance.

Finally, a `reduce` operation is applied to the output `Out`. When `reduction` is `none`, the raw `Out` is returned directly. When `reduction` is `mean`, the mean of the output is returned: :math:`Out = MEAN(Out)`. When `reduction` is `sum`, the sum of the output is returned: :math:`Out = SUM(Out)`.
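
For intuition, here is a small worked instance of the formula above (the numbers are hypothetical, chosen only for illustration): with :math:`margin = 1.0`, :math:`d(input_i, pos_i) = 0.5` and :math:`d(input_i, neg_i) = 1.2`, the per-sample loss is

.. math::
    \max \{0.5 - 1.2 + 1.0, 0\} = 0.3

and with `reduction` set to `mean` these per-sample values are averaged over the batch.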


Parameters
:::::::::
- **p** (float, optional) - The norm degree for the distance function. Default: 2.
- **swap** (bool, optional) - Default: False.
- **margin** (float, optional) - The margin value. Default: 1.
- **reduction** (str, optional) - Specifies the reduction to apply to the output. Available values are ``'none'``, ``'mean'`` and ``'sum'``. Default: ``'mean'``, which returns the mean of the loss; ``'sum'`` returns the sum of the loss; ``'none'`` returns the unreduced loss.
- **name** (str, optional) - Name of the operation (optional, default is None). For details, please refer to :ref:`api_guide_Name`.
Collaborator:

  • Every parameter needs a description; swap needs one as well.
  • Don't mix Chinese and English punctuation ("," --> ","), and remember to end each parameter entry with "。".
  • Note that the Chinese and English docs need to be updated in sync.

Contributor Author:

Fixed.


Shape
:::::::::
- **input** (Tensor) - :math:`[N, *]`, where N is the batch size and `*` means any number of additional dimensions. The data type is float32 or float64.
- **positive** (Tensor) - :math:`[N, *]`, the ``positive`` samples; same shape and data type as ``input``.
- **negative** (Tensor) - :math:`[N, *]`, the ``negative`` samples; same shape and data type as ``input``.
- **output** (Tensor) - The output Tensor. If :attr:`reduction` is ``'none'``, the output shape is :math:`[N, *]`, the same as ``input``; if :attr:`reduction` is ``'mean'`` or ``'sum'``, the output shape is :math:`[1]`.

Returns
:::::::::
A callable object that computes the TripletMarginLoss.

Code example
:::::::::
COPY-FROM: paddle.nn.TripletMarginLoss
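
The COPY-FROM directive above pulls the official example from the Python docstring and is not rendered here, so the following is only a minimal usage sketch. It assumes the constructor arguments documented above (``margin``, ``p``, ``reduction``) and uses illustrative random tensors:

.. code-block:: python

    import paddle

    # Illustrative shapes: a batch of 4 embeddings with 16 features each.
    input = paddle.rand([4, 16], dtype='float32')
    positive = paddle.rand([4, 16], dtype='float32')
    negative = paddle.rand([4, 16], dtype='float32')

    # Build the callable loss layer, then apply it to (input, positive, negative).
    triplet_loss = paddle.nn.TripletMarginLoss(margin=1.0, p=2.0, reduction='mean')
    loss = triplet_loss(input, positive, negative)
    print(loss)  # a single scalar value, because reduction='mean'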
52 changes: 52 additions & 0 deletions docs/api/paddle/nn/functional/triplet_margin_loss_cn.rst
@@ -0,0 +1,52 @@
.. _cn_api_paddle_nn_functional_triplet_margin_loss:

triplet_margin_loss
-------------------------------

.. py:function:: paddle.nn.functional.triplet_margin_loss(input, positive, negative, p: float = 2.0, margin: float = 1.0, swap: bool = False, reduction: str = 'mean')

This API computes the `triplet margin loss` between `input`, `positive` and `negative`, measuring the relative similarity of `input` to the `positive examples` and the `negative examples`. All input tensors have shape :math:`(N, *)`, where `*` means any number of additional dimensions.
Collaborator:

(screenshot)

Note that `` only takes effect when followed by a space.

Contributor Author:

Fixed.



The loss is computed according to the following formula:

.. math::
L(input, pos, neg) = \max \{d(input_i, pos_i) - d(input_i, neg_i) + {\rm margin}, 0\}

where the distance function is

.. math::
d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p



``p`` is the norm degree of the distance function. ``margin`` is the margin between the (input, positive) distance and the (input, negative) distance; for the meaning of ``swap``, see the paper `Learning shallow convolutional feature descriptors with triplet losses <http://www.bmva.org/bmvc/2016/papers/paper119/paper119.pdf>`_.
Collaborator:

Avoid overly colloquial wording in the docs; prefer something like "for details on ``swap``, see the paper".


Finally, a `reduce` operation is applied to the output `Out`. When `reduction` is `none`, the raw `Out` is returned directly. When `reduction` is `mean`, the mean of the output is returned: :math:`Out = MEAN(Out)`. When `reduction` is `sum`, the sum of the output is returned: :math:`Out = SUM(Out)`.


Parameters
:::::::::
- **input** (Tensor) - :math:`[N, *]`, where N is the batch size and `*` means any number of additional dimensions. The data type is float32 or float64.
- **positive** (Tensor) - :math:`[N, *]`, the positive samples.
- **negative** (Tensor) - :math:`[N, *]`, the negative samples.
- **p** (float, optional) - The norm degree for the distance function. Default: 2.
- **swap** (bool, optional) - Default: False.
- **margin** (float, optional) - The margin value. Default: 1.
Collaborator:

Swap the positions of swap and margin; keep the parameter order consistent with the source code.

- **reduction** (str, optional) - Specifies the reduction to apply to the output. Available values are ``'none'``, ``'mean'`` and ``'sum'``. Default: ``'mean'``, which returns the mean of the loss; ``'sum'`` returns the sum of the loss; ``'none'`` returns the unreduced loss.
Collaborator:

"," --> ","

Contributor Author:

Fixed.

- **name** (str, optional) - Name of the operation (optional, default is None). For details, please refer to :ref:`api_guide_Name`.
Collaborator:

Does the source code have a name parameter?

Contributor Author:

Yes, it has a name parameter.


Shape
:::::::::
- **input** (Tensor) - :math:`[N, *]`, where N is the batch size and `*` means any number of additional dimensions. The data type is float32 or float64.
- **positive** (Tensor) - :math:`[N, *]`, the ``positive`` samples; same shape and data type as ``input``.
- **negative** (Tensor) - :math:`[N, *]`, the ``negative`` samples; same shape and data type as ``input``.
- **output** (Tensor) - The output Tensor. If :attr:`reduction` is ``'none'``, the output shape is :math:`[N, *]`, the same as ``input``; if :attr:`reduction` is ``'mean'`` or ``'sum'``, the output shape is :math:`[1]`.
Collaborator:

"," --> ","

Contributor Author:

Fixed.


Returns
:::::::::
Returns the computed loss.

Code example
:::::::::
COPY-FROM: paddle.nn.functional.triplet_margin_loss
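
As with the class API, the COPY-FROM directive pulls the official example from the docstring; a minimal sketch of the functional form, assuming only the parameters documented above and illustrative random tensors, might look like this:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    # Illustrative shapes: a batch of 4 embeddings with 16 features each.
    input = paddle.rand([4, 16], dtype='float32')
    positive = paddle.rand([4, 16], dtype='float32')
    negative = paddle.rand([4, 16], dtype='float32')

    # With reduction='none' the per-sample losses are returned un-reduced.
    loss = F.triplet_margin_loss(input, positive, negative,
                                 margin=1.0, p=2.0, reduction='none')
    print(loss)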