Add lrn layer #9157

Merged: 10 commits, Mar 26, 2018
57 changes: 57 additions & 0 deletions doc/fluid/dev/src/lrn.py
@@ -0,0 +1,57 @@
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


def lrn(input, n=5, k=1.0, alpha=1e-4, beta=0.75, name=None):
"""
**Local Response Normalization Operator**

Refer to `ImageNet Classification with Deep Convolutional Neural Networks
<https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf>`_

The formula is as follows:

.. math::

Output(i, x, y) = Input(i, x, y) / \left(
k + \alpha \sum\limits^{\min(C-1, i + n/2)}_{j = \max(0, i - n/2)}
(Input(j, x, y))^2 \right)^{\beta}

In the above equation:

* :math:`n`: The number of channels to sum over.
* :math:`k`: The offset (usually positive to avoid dividing by 0).
* :math:`\alpha`: The scaling parameter.
* :math:`\beta`: The exponent.
* :math:`C`: The total number of input channels.

Args:
input (Variable): The input tensor of this layer. The rank of the input tensor must be 4, and its layout must be 'NCHW'.
n (int, default 5): The number of channels to sum over.
k (float, default 1.0): An offset (usually positive to avoid dividing by 0).
alpha (float, default 1e-4): The scaling parameter.
beta (float, default 0.75): The exponent.
name (str, default None): A name for this operation.

Raises:
ValueError: If the rank of the input tensor is not 4.

Returns:
A tensor variable storing the transformation result.

Examples:
.. code-block:: python

data = fluid.layers.data(name="data", shape=[3, 112, 112], dtype="float32")
lrn = fluid.layers.lrn(input=data)
"""
Contributor:
Please delete this file.

70 changes: 70 additions & 0 deletions python/paddle/fluid/layers/nn.py
@@ -73,6 +73,7 @@
'smooth_l1',
'one_hot',
'autoincreased_step_counter',
'lrn',
]


@@ -3292,3 +3293,72 @@ def autoincreased_step_counter(counter_name=None, begin=1, step=1):
counter.stop_gradient = True

return counter


def lrn(input, n=5, k=1.0, alpha=1e-4, beta=0.75, name=None):
"""
**Local Response Normalization Operator**

Refer to `ImageNet Classification with Deep Convolutional Neural Networks
Contributor:
    1. Please add a description of what the API does.
    2. Place the reference after the explanation of the formula symbols and before the parameter descriptions.

Contributor (Author):
    done

<https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf>`_

The formula is as follows:

.. math::

Output(i, x, y) = Input(i, x, y) / \left(
k + \alpha \sum\limits^{\min(C-1, i + n/2)}_{j = \max(0, i - n/2)}
(Input(j, x, y))^2 \right)^{\beta}

In the above equation:

* :math:`n`: The number of channels to sum over.
* :math:`k`: The offset (usually positive to avoid dividing by 0).
* :math:`\alpha`: The scaling parameter.
* :math:`\beta`: The exponent.
* :math:`C`: The total number of input channels.

Args:
input (Variable): The input tensor of this layer. The rank of the input tensor must be 4, and its layout must be 'NCHW'.
n (int, default 5): The number of channels to sum over.
k (float, default 1.0): An offset (usually positive to avoid dividing by 0).
Contributor:
    avoid being divided by 0

Contributor (Author):
    done

alpha (float, default 1e-4): The scaling parameter.
beta (float, default 0.75): The exponent.
name (str, default None): A name for this operation.

Raises:
ValueError: If the rank of the input tensor is not 4.

Returns:
A tensor variable storing the transformation result.

Examples:
.. code-block:: python

data = fluid.layers.data(name="data", shape=[3, 112, 112], dtype="float32")
lrn = fluid.layers.lrn(input=data)
"""
helper = LayerHelper('lrn', **locals())
dtype = helper.input_dtype()
input_shape = input.shape
dims = len(input_shape)

if dims != 4:
raise ValueError(
"dims of input must be 4(not %d), and it's order must be NCHW" %
(dims))

mid_out = helper.create_tmp_variable(dtype=dtype, stop_gradient=True)
lrn_out = helper.create_tmp_variable(dtype)
helper.append_op(
type="lrn",
inputs={"X": input},
outputs={
"Out": lrn_out,
"MidOut": mid_out,
},
attrs={"n": n,
"k": k,
"alpha": alpha,
"beta": beta})

return lrn_out
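
As a cross-check of the formula in the docstring, here is a minimal NumPy sketch of the same normalization. This is an illustrative reimplementation, not part of the PR; the helper name lrn_ref and the NCHW test input are assumptions.

import numpy as np


def lrn_ref(x, n=5, k=1.0, alpha=1e-4, beta=0.75):
    # Illustrative NumPy version of the LRN formula above: channel i is
    # divided by (k + alpha * sum of squares over the channel window
    # [max(0, i - n/2), min(C - 1, i + n/2)]) ** beta.
    N, C, H, W = x.shape
    out = np.empty_like(x)
    for i in range(C):
        lo = max(0, i - n // 2)
        hi = min(C - 1, i + n // 2)
        sq = np.sum(x[:, lo:hi + 1, :, :] ** 2, axis=1)
        out[:, i, :, :] = x[:, i, :, :] / (k + alpha * sq) ** beta
    return out

# e.g. lrn_ref(np.random.rand(2, 6, 4, 4).astype("float32")) should match
# the output of the lrn layer on the same input, up to float tolerance.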
7 changes: 7 additions & 0 deletions python/paddle/fluid/tests/unittests/test_layers.py
@@ -231,6 +231,13 @@ def test_softmax(self):
self.assertIsNotNone(layers.softmax(hid))
print(str(program))

def test_lrn(self):
program = Program()
with program_guard(program):
data = layers.data(name='data', shape=[6, 2, 2], dtype='float32')
self.assertIsNotNone(layers.lrn(data))
print(str(program))

def test_get_places(self):
program = Program()
with program_guard(program):
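
For completeness, a usage sketch running the new layer end to end. This is a minimal sketch assuming the fluid Program/Executor API of this era; the program names and the random feed data are ours, not part of the PR.

import numpy as np
import paddle.fluid as fluid

main = fluid.Program()
startup = fluid.Program()
with fluid.program_guard(main, startup):
    data = fluid.layers.data(name="data", shape=[3, 112, 112], dtype="float32")
    out = fluid.layers.lrn(input=data)

# Run on CPU; the layer has no learnable parameters, so startup is trivial.
exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup)
feed = {"data": np.random.rand(1, 3, 112, 112).astype("float32")}
result, = exe.run(main, feed=feed, fetch_list=[out])
print(result.shape)  # expected: (1, 3, 112, 112)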