Implement 3D padding for ND input.
TE-PoornimaBiradar committed May 22, 2018
1 parent 9572780 commit 344d350
Showing 5 changed files with 307 additions and 154 deletions.
40 changes: 35 additions & 5 deletions build-tools/code_generator/functions.yaml
@@ -1827,26 +1827,56 @@ Array Manipulation:
snake_name: pad
doc: |2
Pad arrays with specified sizes for dimensions.
Pads given N-D array with specified sizes of dimensions.
The dimensions that get padded begin with the last dimension and move forward.
inputs:
x:
doc: N-D array
arguments:
pad_width:
doc: "A tuple of a repeation of 2 consecutive integers composed of padding sizes of after and before edges at each dimension of an input array. The padding dimensions are aligned to the last dimension of the input array. When ``pad_width=(b0, a0, b1, a1)`` and an input ``xx`` with a shape of ``(s0, s1, s2, s3)`` are given, the output ``y`` becomes ``(s0, s1, b0+s2+a0, b1+s3+a1)``."
doc: |
n-elem tuple, where n/2 <= input dimensions and n is even.
len(pad_width)/2 represents the padding dimension (e.g. 1D, 2D, 3D, etc.).
(Currently padding up to 3D is supported.)
type: repeated int64
default: (0,) * len(x.shape)
mode:
doc: "Padding mode is one of the following. 1) ``constant`` : elements in pad region are filled with ``constant_value``, 2) ``reflect`` : TODO, 3) ``replicate`` : padded elements are filled with the values in nearest edges."
doc: |
Padding mode is one of the following.
1) constant : Elements in the pad region are filled with constant_value.
2) replicate : Padded elements are filled with the values of the nearest edges.
3) reflect : Padded with the reflection of the vector mirrored on the first and last values of the vector along each axis.
(Currently only `constant` mode is supported.)
type: string
default: '''constant'''
constant_value:
doc: Constant values filled in padded regions if mode is ``constant``.
doc: |
Constant values filled in padded regions if mode is `constant`.
type: float
default: 0
outputs:
y:
doc: Padded N-D array
doc: |
Padded N-D array (e.g. a (B, C, H, W) shape) whose dimensions depend on pad_width.
ndim() of the output N-D array will be the same as ndim() of the input N-D array.
- for 1D padding:
N-D input array with padding of the form (padLeft, padRight).
The output N-D array dimension: (B, C, H, padLeft + W + padRight).
- for 2D padding:
N-D input array with padding of the form (padTop, padBottom, padLeft, padRight).
The output N-D array dimension: (B, C, padTop + H + padBottom, padLeft + W + padRight).
- for 3D padding:
N-D input array with padding of the form (padFront, padBack, padTop, padBottom, padLeft, padRight).
The output N-D array dimension: (B, padFront + C + padBack, padTop + H + padBottom, padLeft + W + padRight).
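
As an illustration of the shape arithmetic above (not part of the commit), here is a minimal NumPy sketch; np.pad is also what the reference implementation in the tests below uses:

import numpy as np

x = np.zeros((2, 3, 4, 5))  # (B, C, H, W)

# 2D padding (padTop, padBottom, padLeft, padRight) pads only the last two axes:
y = np.pad(x, ((0, 0), (0, 0), (1, 2), (3, 4)), 'constant', constant_values=0.0)
print(y.shape)  # (2, 3, 1 + 4 + 2, 3 + 5 + 4) == (2, 3, 7, 12)

# 3D padding (padFront, padBack, padTop, padBottom, padLeft, padRight) also pads C:
z = np.pad(x, ((0, 0), (1, 1), (2, 2), (3, 3)), 'constant', constant_values=0.0)
print(z.shape)  # (2, 1 + 3 + 1, 2 + 4 + 2, 3 + 5 + 3) == (2, 5, 8, 11)

# NumPy analogues of the modes not yet supported here:
# np.pad's 'edge' corresponds to 'replicate', and 'reflect' to 'reflect'.
w = np.pad(np.arange(4.0), (2, 2), 'edge')
print(w)  # [0. 0. 0. 1. 2. 3. 3. 3.]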
Transpose:
snake_name: transpose
doc: |2
81 changes: 52 additions & 29 deletions include/nbla/function/pad.hpp
@@ -12,7 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.


#ifndef NBLA_FUNCTION_PAD_HPP
#define NBLA_FUNCTION_PAD_HPP

@@ -24,63 +23,87 @@ namespace nbla {

NBLA_REGISTER_FUNCTION_HEADER(Pad, const vector<int> &, const string &, float);

/** Pads given tensor with constant padding.
len(pad_width)/2 represents the padding dimension.
/** Pads given N-D array with specified sizes of dimensions.
The dimensions that get padded begin with the last dimension and move forward.
Inputs:
- x: N-D array.
- pad_width: n-elem tuple, where n/2 ≤ input dimensions and n is even.
- mode - 'constant', 'reflect', 'edge'. Default: 'constant'
- constant_value - fill value for 'constant' padding. Default: 0
- pad_width: n-elem tuple, where n/2 <= input dimensions and n is even.
len(pad_width)/2 represents the padding dimension. (e.g. 1D, 2D, 3D etc.)
(Currently padding up to 3D is supported.)
- mode - Padding mode is one of the following. Default: constant.
1) constant : Elements in the pad region are filled with constant_value.
2) replicate : Padded elements are filled with the values of the nearest
edges.
3) reflect : Padded with the reflection of the vector mirrored on the
first and last values of the vector along each axis.
(Currently only constant mode is supported.)
- constant_value - Constant values filled in padded regions if mode is constant.
Default: 0
Outputs:
- N-D array (B, C, H, W) where dimension depends on pad_width.
- Padded N-D array (e.g. (B, C, H, W) shape) where dimension depends on
pad_width.
ndim() of the output N-D array will be the same as ndim() of the input N-D array.
for 1D padding:
3D input tensor with padding of the form (padLeft, padRight)
output tensor dimension (B, C, H, padLeft+W+padRight)
for 2D:
4D input tensor with padding of the form (padTop, padBottom, padLeft, padRight).
output tensor dimension (B, C, padTop+H+padBottom, padLeft+W+padRight)
N-D input array with padding of the form (padLeft, padRight)
output N-D array dimension (B, C, H, padLeft + W + padRight)
for 2D padding:
N-D input array with padding of the form (padTop, padBottom, padLeft,
padRight).
output N-D array dimension (B, C, padTop + H + padBottom, padLeft + W +
padRight)
for 3D padding:
N-D input array with padding of the form (padFront, padBack, padTop,
padBottom, padLeft, padRight).
output N-D array dimension (B, padFront + C + padBack, padTop + H +
padBottom, padLeft + W + padRight)
@tparam T Data type for computation.
\ingroup FunctionImplGrp
*/

template <typename T> class Pad : public BaseFunction<const vector<int> &, const string &, float> {
template <typename T>
class Pad : public BaseFunction<const vector<int> &, const string &, float> {
protected:
const vector<int> pad_width_;
const string mode_;
float constant_value_;

public:
Pad(const Context &ctx, const vector<int> & pad_width, const string & mode, float constant_value) : BaseFunction(ctx, pad_width, mode, constant_value)
, pad_width_(pad_width)
, mode_(mode)
, constant_value_(constant_value)
{}
Pad(const Context &ctx, const vector<int> &pad_width, const string &mode,
float constant_value)
: BaseFunction(ctx, pad_width, mode, constant_value),
pad_width_(pad_width), mode_(mode), constant_value_(constant_value) {
pad_mode_["constant"] = p_constant;
pad_mode_["replicate"] = p_replicate;
pad_mode_["reflect"] = p_reflect;
}
virtual ~Pad() {}
virtual shared_ptr<Function> copy() const {
return create_Pad(ctx_, pad_width_, mode_, constant_value_);
}
virtual int min_inputs() { return 1; }
virtual int min_outputs() { return 1; }
virtual vector<dtypes> in_types() {
return vector<dtypes>{get_dtype<T>()};
}
virtual vector<dtypes> out_types() {
return vector<dtypes>{get_dtype<T>()};
}
virtual vector<dtypes> in_types() { return vector<dtypes>{get_dtype<T>()}; }
virtual vector<dtypes> out_types() { return vector<dtypes>{get_dtype<T>()}; }
virtual vector<string> allowed_array_classes() {
return SingletonManager::get<Cpu>()->array_classes();
}
virtual string name() { return "Pad"; }

typedef enum pad_mode { p_constant, p_replicate, p_reflect } pad_mode;
std::map<std::string, pad_mode> pad_mode_;

protected:
NBLA_API virtual void setup_impl(const Variables &inputs, const Variables &outputs);
NBLA_API virtual void forward_impl(const Variables &inputs, const Variables &outputs);
NBLA_API virtual void backward_impl(const Variables &inputs, const Variables &outputs,
const vector<bool> &propagate_down,
const vector<bool> &accum);
NBLA_API virtual void setup_impl(const Variables &inputs,
const Variables &outputs);
NBLA_API virtual void forward_impl(const Variables &inputs,
const Variables &outputs);
NBLA_API virtual void backward_impl(const Variables &inputs,
const Variables &outputs,
const vector<bool> &propagate_down,
const vector<bool> &accum);
};
}
#endif
6 changes: 2 additions & 4 deletions python/benchmark/function/test_pad.py
@@ -21,6 +21,7 @@

from function_benchmark import FunctionBenchmark, Inspec


@pytest.mark.parametrize("seed", [313])
def pad_params():
inspecs = []
@@ -32,12 +33,9 @@ def pad_params():


@pytest.mark.parametrize('inspecs', pad_params())

def test_pad(inspecs, nnabla_opts):
fb = FunctionBenchmark(
F.pad, inspecs, [(10,10,10,10),'constant',0.0], {},
F.pad, inspecs, [(10, 10, 10, 10), 'constant', 0.0], {},
nnabla_opts.ext, nnabla_opts.ext_kwargs)
fb.benchmark()
fb.write(writer=nnabla_opts.function_benchmark_writer)


52 changes: 38 additions & 14 deletions python/test/function/test_pad.py
@@ -21,28 +21,52 @@
ctxs = list_context('Pad')


def ref_pad(x, pad_with,mode,constant_value):

def ref_pad(x, pad_with, mode, constant_value):
pair_len = int(len(pad_with)/2)
pad_list = [(0,0)] * (len(x.shape)-pair_len)

pad_list = [(0, 0)] * (len(x.shape)-pair_len)
pad_list.extend([(a, b) for a, b in zip(pad_with[:-1:2], pad_with[1::2])])
new_pad = tuple(pad_list)
ret = np.pad(x, new_pad, mode, constant_values=constant_value)

ret = np.pad(x, new_pad, mode, constant_values=constant_value)
return ret
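
# For illustration (not part of the diff): ref_pad aligns the pad_width pairs
# to the trailing axes and delegates to np.pad, so with a 4D input and a
# 4-element pad_width only H and W are padded, e.g.
#   ref_pad(np.zeros((2, 3, 4, 5)), (1, 1, 2, 2), 'constant', 0.0).shape
#   # -> (2, 3, 1 + 4 + 1, 2 + 5 + 2) == (2, 3, 6, 9)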


@pytest.mark.parametrize("ctx, func_name", ctxs)
@pytest.mark.parametrize("inshape", [(2,2,2,2),(3,5,7,1),(2,3,4,6),(1,2,3,2),(1,1,4,5),(1,2,2,2),(1,1,4,4),(1,3,5,7),(1,1,5,9)])
@pytest.mark.parametrize("pad_with", [(2,2),(2,3),(1,1),(1,1,1,1),(3,3,3,3),(2,3,3,1)])
@pytest.mark.parametrize("mode, constant_value", [('constant',0.0),('constant',0.2),('constant',5.5),('constant',-0.1)])

@pytest.mark.parametrize("seed", [313])
def test_pad_forward_backward(seed, inshape, pad_with,mode,constant_value, ctx, func_name):
def pad_forward_backward(seed, inshape, pad_with, mode, constant_value, ctx, func_name):
from nbla_test_utils import function_tester
rng = np.random.RandomState(seed)
# Generate ND inputs
i = rng.randn(*inshape).astype(np.float32)
inputs = [i]
function_tester(rng, F.pad, ref_pad, inputs, func_args=[pad_with,mode,constant_value],ctx=ctx, func_name=func_name, dstep=1e-1)
inputs_nd = [i]

function_tester(rng, F.pad, ref_pad, inputs_nd, func_args=[
pad_with, mode, constant_value], ctx=ctx, func_name=func_name, dstep=1e-1)


@pytest.mark.parametrize("ctx, func_name", ctxs)
@pytest.mark.parametrize("inshape_1d", [(4,), (5,)])
@pytest.mark.parametrize("pad_with_1d", [(2, 2), (1, 1), (2, 3)])
@pytest.mark.parametrize("mode, constant_value", [('constant', 0.0), ('constant', 0.2), ('constant', 5.5), ('constant', -0.1)])
@pytest.mark.parametrize("seed", [313])
def test_pad_forward_backward_1D(seed, inshape_1d, pad_with_1d, mode, constant_value, ctx, func_name):
pad_forward_backward(seed, inshape_1d, pad_with_1d,
mode, constant_value, ctx, func_name)


@pytest.mark.parametrize("ctx, func_name", ctxs)
@pytest.mark.parametrize("inshape_2d", [(4, 4), (5, 5), (2, 3)])
@pytest.mark.parametrize("pad_with_2d", [(2, 2), (1, 1), (2, 3), (2, 2, 2, 2), (3, 3, 3, 3), (2, 3, 3, 4)])
@pytest.mark.parametrize("mode, constant_value", [('constant', 0.0), ('constant', 0.2), ('constant', 5.5), ('constant', -0.1)])
@pytest.mark.parametrize("seed", [313])
def test_pad_forward_backward_2D(seed, inshape_2d, pad_with_2d, mode, constant_value, ctx, func_name):
pad_forward_backward(seed, inshape_2d, pad_with_2d,
mode, constant_value, ctx, func_name)


@pytest.mark.parametrize("ctx, func_name", ctxs)
@pytest.mark.parametrize("inshape_Nd", [(2, 2, 2), (3, 5, 7), (2, 3, 2), (2, 2, 2, 2), (3, 5, 7, 1), (2, 3, 4, 6), (2, 2, 2, 2, 2), (3, 1, 5, 7, 3), (2, 3, 1, 1, 4, 5), (2, 2, 2, 2, 2, 2), (3, 3, 1, 5, 7, 3), (2, 2, 3, 1, 1, 4, 5)])
@pytest.mark.parametrize("pad_with_3d", [(2, 2), (1, 1), (2, 3), (2, 2, 2, 2), (3, 3, 3, 3), (2, 3, 3, 4), (2, 2, 2, 2, 2, 2), (3, 3, 3, 3, 3, 3), (2, 3, 2, 3, 3, 4)])
@pytest.mark.parametrize("mode, constant_value", [('constant', 0.0), ('constant', 0.2), ('constant', 5.5), ('constant', -0.1)])
@pytest.mark.parametrize("seed", [313])
def test_pad_forward_backward_Nd(seed, inshape_Nd, pad_with_3d, mode, constant_value, ctx, func_name):
pad_forward_backward(seed, inshape_Nd, pad_with_3d,
mode, constant_value, ctx, func_name)
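
For reference, a minimal usage sketch of the function under test, assuming the standard nnabla Python API (nn.Variable.from_numpy_array, F.pad); the expected shape follows the documentation above, not code shown in this diff.

import numpy as np
import nnabla as nn
import nnabla.functions as F

x = nn.Variable.from_numpy_array(np.random.randn(2, 3, 4, 5).astype(np.float32))
# 2D padding (padTop, padBottom, padLeft, padRight) pads the last two axes.
y = F.pad(x, (1, 1, 2, 2), 'constant', 0.0)
y.forward()
print(y.shape)  # (2, 3, 1 + 4 + 1, 2 + 5 + 2) == (2, 3, 6, 9)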
