add pad func #26106
Conversation
Thanks for your contribution!
Force-pushed from 8e1ca75 to 53a713f
""" | ||
data_format = data_format.upper() | ||
|
||
input_dim = len(input.shape) |
A check could be added here: when the number of dimensions of `input` does not match the number of elements in `pad`, raise a ValueError; otherwise the error surfaces from the lower layers.
When `pad` is a Variable, its shape may be `[1, -1]`, so the length cannot be obtained at compile time, which makes this check hard to perform~
    elif input_dim == 4:
        pad = concat([pad, zeros((2, ), dtype="int32")], axis=0)
        unsqueezed_dim = [1]
        input = unsqueeze(input, axes=unsqueezed_dim)
Please add a check here as well.
    else:
        raise ValueError(
            "data_format should be in one of "
            "[NCL, NCHW, NCDHW, NLC, NHWC, NDHWC] but got {}".format(
                data_format))
done, thx
python/paddle/nn/layer/common.py
Outdated
@@ -254,91 +254,299 @@ def forward(self, input):
        return out


class Pad2D(layers.Layer):
Is the previous Pad2D API no longer needed?
It has been added back, thx
        self.assertRaises(Exception, test_reflect_2)

        self.assertRaises(Exception, test_reflect_3)
Should a test function for pad1d be added?
This one tests pad1d: TestPad1dClass
paddle/fluid/operators/pad3d_op.cc
Outdated
for (int c = 0; c < channels; ++c) {
  for (int out_d = 0; out_d < out_depth; ++out_d) {
    for (int out_h = 0; out_h < out_height; ++out_h) {
      for (int out_w = 0; out_w < out_width; ++out_w) {
use std::memcpy for the innermost dim? i.e., copy `w` (the last stride) elements in one go.
paddle/fluid/operators/pad3d_op.cc
Outdated
}

template <typename T>
void Pad3DConstNDHWC(const T* in_data, const int num, const int channels,
can those two be combined? the only difference is the last stride (`w` vs `w * c`)
paddle/fluid/operators/pad3d_op.cc
Outdated
}

template <typename T>
void Pad3DReflectNCDHW(const T* in_data, const int num, const int channels,
functions for different modes can be merged? suggestion:
- `mode` should be a template parameter; add a templated function to calculate the padding value for given coordinates (e.g., something like `get_pad_value(x, y)`)
- fill the padded part of the canvas with the above method
- iterate over `c` -> `h` (or `1` -> `h`) and memcpy `w` or `c * w` for the inner part of the canvas
paddle/fluid/operators/pad3d_op.cc
Outdated
}

template <typename T>
void Pad3DGradConstNCDHW(T* d_in_data, const int num, const int channels,
merge grad functions as above.
done, thx
paddle/fluid/operators/pad3d_op.cc
Outdated
  }
}

static inline void GetPaddings(int* paddings,
simply return a const int pointer to `tensor.data` or `vector.data`, since `pads` is read-only?
done, thx
using framework::Tensor;

template <typename T>
__global__ void Pad3DConstNCDHW(const int nthreads, const T* in_data,
merge functions as above
the difference being that the kernel should iterate over the padded canvas; if the current coords fall in the padded part, use the templated method to calculate the pad value, otherwise read from the original tensor.
but keep the kernels for different data layouts separate, as layout affects coalesced data access
it seems that function template pointers cannot work properly in CUDA code.
        raise ValueError("data_format should be in one of "
                         "[NCL, NCHW, NCDHW, NLC, NHWC, NDHWC] but got {}".format(
                             data_format))
add a fast path for dygraph mode. check the guideline for detailed info.
done, thx
python/paddle/nn/layer/common.py
Outdated
class Pad(layers.Layer):
    """
    :alias_main: paddle.nn.Pad
`:alias` annotations are no longer needed.
removed, thx
python/paddle/nn/layer/common.py
Outdated
        self._pad = pad
        self._name = name

    def forward(self, input):
`input` -> `x`
done, thx
in_h = min(in_h, 2 * in_height - in_h - 2);
in_w = min(in_w, 2 * in_width - in_w - 2);

platform::CudaAtomicAdd(
atomic add is only needed for the reflected part.
const int in_h = min(in_height - 1, max(out_h - pad_top, 0));
const int in_w = min(in_width - 1, max(out_w - pad_left, 0));

platform::CudaAtomicAdd(
atomic add is only needed for the border.
done, thx
        self.assertTrue(np.allclose(output.numpy(), np_out))


class TestPad1dClass(unittest.TestCase):
This name needs to be changed.
Renamed.
        self.assertTrue(np.allclose(output.numpy(), np_out))


class TestPadAPI(unittest.TestCase):
The function name needs to be confirmed.
Renamed consistently.
python/paddle/nn/layer/common.py
Outdated
    Code Examples:
        .. code-block:: python

            import paddle
            import paddle.fluid as fluid
remove
done, thx
@@ -111,6 +111,7 @@
from .math import elementwise_div  #DEFINE_ALIAS
from .math import elementwise_floordiv  #DEFINE_ALIAS
from .math import elementwise_max  #DEFINE_ALIAS
from .math import elementwise_mul  #DEFINE_ALIAS
git conflict? rebase?
fixed, thx
        with program_guard(Program(), Program()):
            input_shape = (1, 2, 3, 4, 5)
            pad = [1, 2, 1, 1, 3, 4]
            mode = "constant"
only test "constant"?
Added unit tests for the other modes, thx
        self.assertTrue(np.allclose(output.numpy(), np_out))


class TestPad1dClass(unittest.TestCase):
what happens to 2d? also, why is a separate test class needed?
Added separate tests for the 1d, 2d, and 3d APIs, thx
def pad(x,
        pad=[0, 0, 0, 0],
remove the default value; `pad` should be required.
done, thx
from ...fluid.layers import unsqueeze
from ...fluid.layers import elementwise_mul
from ...tensor import clamp
from ...tensor import sum
Please add `#DEFINE_ALIAS` to all of these; the docs mapping relies on it.
OK, added.
python/paddle/nn/layer/__init__.py
Outdated
from .common import ZeroPad2d  #DEFINE_ALIAS
from .common import ReplicationPad3d  #DEFINE_ALIAS
from .common import ConstantPad3d  #DEFINE_ALIAS
from .common import CosineSimilarity
#DEFINE_ALIAS
done
python/paddle/nn/layer/common.py
Outdated
    # [ 5. 4. 5. 6.]]]]
    """

    def __init__(self, padding=[0, 0, 0, 0], data_format="NCHW", name=None):
`padding` should not have a default value here.
Removed, thanks.
LGTM
PR types
Function optimization
PR changes
APIs
Describe
Add pad op