
My unpool max 2d #5826

Merged 38 commits into PaddlePaddle:develop on Nov 29, 2017

Conversation

sweetsky0901 (Contributor):

resolve #5787
Currently only 2D max unpooling is implemented.


class Unpool2dOpMaker : public framework::OpProtoAndCheckerMaker {
public:
Unpool2dOpMaker(framework::OpProto* proto, \
Collaborator:

It seems that we don't need the trailing \ here.

Contributor (author):

It was meant to show that the lines continue as one. OK, I'll delete it.

"the number of channels, H and W is the height and "
"width of feature.");
AddAttr<std::vector<int>>("ksize",
"(vector ), the unpooling window size(height, width) "
Collaborator:

(vector ) => (vector)

size(height => size (height

Contributor (author):

done

"of unpooling operator.");
AddAttr<std::vector<int>>("strides",
"(vector, default:{1, 1}), "
"strides(height, width) of unpooling operator.")
Collaborator:

strides(height => strides (height

Contributor (author):

done

framework::OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor) The input tensor of unpool operator. "
Collaborator:

I am not sure whether we should have the variable type in the parentheses. Is this a convention?

Contributor (author):

I'm not sure where to look up whether such a convention exists, but the several ops I checked are all written this way.

AddAttr<std::string>("unpoolingtype",
"(string), unpooling type, can be \"max\" for max-unpooling ")
.InEnum({"max"});
AddComment(R"DOC(
Collaborator:

I'd have thought we no longer need that many "-characters in the C++ DOC string.

Contributor (author):

done

namespace paddle {
namespace operators {

using Tensor = framework::Tensor;
Collaborator:

Because a using declaration in a header file can break the existing namespace hierarchy of every file that includes that header (https://google.github.io/styleguide/cppguide.html#Namespaces), we usually don't do that.

Contributor (author):

Sorry, I really didn't pay attention to this; it was copied from another op's code. I'll remove it.
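For reference, a minimal sketch of the agreed fix: the namespace-scope alias is dropped from the header and the type is written out at each use (the class body is abbreviated and the parameter list is illustrative, based on the functor names in this PR):

namespace paddle {
namespace operators {
namespace math {

// No `using Tensor = framework::Tensor;` at namespace scope in the header;
// spell out framework::Tensor instead.
template <typename Place, typename T>
class Unpool2dMaxFunctor {
 public:
  void operator()(const platform::DeviceContext& context,
                  const framework::Tensor& input,
                  const framework::Tensor& indices,
                  framework::Tensor* output);
};

}  // namespace math
}  // namespace operators
}  // namespace paddle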

set_zero(context.device_context(), out, static_cast<T>(0));
}
switch (ksize.size()) {
case 2: {
Collaborator:

There is only one case in this switch; it seems that we should use if instead of switch.

Contributor (author):

That's because I originally planned to add the 3D version later. OK, I'll remove it for now.
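In other words, the suggestion is roughly the following (a sketch; the 2-D unpooling body is elided):

// Only the 2-D case exists for now, so a plain if is enough:
if (ksize.size() == 2) {
  // ... 2-D max-unpooling path ...
}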

zero(device_ctx, in_x_grad, static_cast<T>(0.0));
}
switch (ksize.size()) {
case 2: {
Collaborator:

Like in the above comment.

Contributor (author):

That's because I originally planned to add the 3D version later. OK, I'll remove it for now.

std::vector<int> paddings = ctx->Attrs().Get<std::vector<int>>("paddings");

PADDLE_ENFORCE(in_x_dims.size() == 4,
"Unpooling intput should be 4-D.");
Collaborator:

should be 4-D => must be of 4-dimensional

Contributor (author):

done

math::SetConstant<Place, T> zero;
if (in_x_grad) {
in_x_grad->mutable_data<T>(context.GetPlace());
zero(device_ctx, in_x_grad, static_cast<T>(0.0));
Collaborator:

0.0 => 0, in the case that T is int.

Contributor (author):

done

sweetsky0901 (author) left a review comment:

done thanks


PADDLE_ENFORCE(ctx->HasInput("Y"), "Input(Y) of UnpoolOp"
"should not be null.");
PADDLE_ENFORCE(ctx->HasOutput("Out"),
"Output(Out) of UnpoolOp should not be null.");
Contributor (author):

I didn't quite understand where else spaces should be added here, sorry.

@qingqing01 mentioned this pull request on Nov 24, 2017
#pragma once
#include "paddle/framework/tensor.h"
#include "paddle/platform/device_context.h"
#include "paddle/platform/hostdevice.h"
Contributor:

remove this unused header file.

Contributor (author):

done

const framework::Tensor& indices,
framework::Tensor * input_grad,
const framework::Tensor& output,
const framework::Tensor& output_grad);
Contributor:

Put the input_grad as the last parameter.

https://google.github.io/styleguide/cppguide.html#Function_Parameter_Ordering

When defining a function, parameter order is: inputs, then outputs.

Contributor (author):

done
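The reordered declaration would look roughly like this (names taken from the snippet above; any other parameters the real signature has are omitted):

void operator()(const framework::Tensor& indices,
                const framework::Tensor& output,
                const framework::Tensor& output_grad,
                framework::Tensor* input_grad);  // output parameter last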

for (int i = 0; i < input_feasize; ++i) {
int index = indices_data[i];
PADDLE_ENFORCE(index < output_feasize, "err index in unpooling!");
output_data[index] = input_data[i];
Contributor:

Why isn't this output_data[index] += input_data[i];?

Also, Unpool2dMaxFunctor doesn't need to be written at all; MaxPool2dWithIndexGradFunctor can be reused directly.

Contributor (author):

Wouldn't that make the code structure a bit messy? I'd rather not do it that way.

* All tensors are in NCHW format.
*/
template <typename T>
class Unpool2dMaxGradFunctor<platform::GPUPlace, T> {
Contributor:

Same as on the CPU side: MaxPool2dWithIndexGradFunctor can be reused here too, and Unpool2dMaxFunctor can be removed.

Contributor (author):

Wouldn't that make the code structure a bit messy? I'd rather not do it that way.

"(vector defalut:{0,0}), "
"paddings (height, width) of unpooling operator.")
.SetDefault({0, 0});
AddAttr<std::string>("unpoolingtype",
Contributor:

unpoolingtype -> unpooling_type

Contributor (author):

done

sweetsky0901 (author) left a review comment:

done thanks


int input_feasize = input_height * input_width;
int output_feasize = output_height * output_width;
const T* input_data = input.data<T>();
const T * indices_data = indices.data<T>();
@NHZlX (Contributor), Nov 27, 2017:

You specify the indices type as int16 in https://github.com/sweetsky0901/Paddle/blob/27cf7f3376e2005522dbba8c964cb2e449b46380/python/paddle/v2/fluid/tests/test_unpool_op.py#L56, so using const T * indices_data = indices.data<T>(); here must be wrong. Also, Paddle does not support int16 currently, so you must specify int32 in the test and either use const int * indices_data = indices.data<int>(); or add a new type parameter to the template, like here.

Contributor (author):

done
The framework checks by default that the input types are all the same; if they differ, GetKernelType has to be overridden:
 protected:
  framework::OpKernelType GetKernelType(
      const framework::ExecutionContext& ctx) const override {
    return framework::OpKernelType(
        framework::ToDataType(ctx.Input<framework::Tensor>("X")->type()),
        ctx.device_context());
  }
Previously, to get around this check, the indices were passed as float and cast back at the point of computation.

Contributor:

That's not it.
When registering an op we usually register two kernels (double and float). At run time the question is which one to use (double or float); GetKernelType is called to decide: if X is float, the float kernel is invoked, and if X is double, the double kernel is invoked.

Contributor (author):

Is there documentation for any of this?
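For context, the pattern the reviewer describes looks roughly like this (a sketch of the usual fluid-era registration, not code from this PR):

// Register both a float and a double kernel for the op:
REGISTER_OP_CPU_KERNEL(
    unpool, ops::UnpoolKernel<paddle::platform::CPUPlace, float>,
    ops::UnpoolKernel<paddle::platform::CPUPlace, double>);
// At run time, GetKernelType() inspects the type of input "X" and the
// framework dispatches to the kernel registered for that type.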

sweetsky0901 (author) left a review comment:

done thanks


for (int c = 0; c < output_channels; ++c) {
for (int i = 0; i < input_feasize; ++i) {
int index = indices_data[i];
PADDLE_ENFORCE(index < output_feasize, "err index in unpooling!");
Contributor:

This can be moved outside the loop: convert indices to an EigenMatrix or EigenVector, call maximum to get the largest element of indices, and then check once whether it is out of bounds. The CUDA implementation can do the same, avoiding bounds checks inside the CUDA kernel.

Contributor (author):

Calling maximum actually adds many extra comparisons; I'm not in favor of writing it that way.
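For illustration, the reviewer's idea amounts to something like the following (a sketch that uses std::max_element from <algorithm> rather than Eigen's maximum; names are taken from the snippet above):

// Hoist the bounds check out of the copy loop: one scan for the largest
// index, then a single enforcement.
auto max_index = *std::max_element(indices_data, indices_data + input_feasize);
PADDLE_ENFORCE(max_index < output_feasize, "err index in unpooling!");
for (int i = 0; i < input_feasize; ++i) {
  output_data[indices_data[i]] = input_data[i];
}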

const int channels,
T* output_data,
const int output_height,
const int output_width) {
Contributor:

Please note the order of parameters.

"(Tensor) The input tensor of unpool operator. "
"The format of input tensor is NCHW. Where N is batch size, C is the "
"number of channels, H and W is the height and width of feature.");
AddInput("Y",
Contributor:

I don't think the name of the variable is intuitive. It can be changed to Masks or Indices.

Contributor:

agree with @chengduoZH

Contributor (author):

done

}
};

int OutputSize(int input_size, int ksize, int padding, int stride) {
Contributor:

Please add inline.

Contributor (author):

Leaving this unchanged for now.


PADDLE_ENFORCE(in_x_dims.size() == 4,
"Unpooling intput must be of 4-dimensional.");
for (int i = 0; i < 4; ++i) {
Contributor:

You can directly use the "==" operator to compare in_x_dims and in_y_dims:

bool operator==(const Dim<i>& o) const {
return (head == o.head) && (tail == o.tail);
}

Contributor (author):

I'd rather wait for documentation on the key data types and then change all of these in one batch.

Contributor:

PADDLE_ENFORCE_EQ(in_x_dims, in_y_dims);

is enough.

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

done

math::SetConstant<Place, T> zero;
if (in_x_grad) {
in_x_grad->mutable_data<T>(context.GetPlace());
zero(device_ctx, in_x_grad, static_cast<T>(0));
Contributor:

You can directly use this function.

Contributor (author):

Is the way it's written now also acceptable?

context.Input<framework::Tensor>(framework::GradVarName("Out"));
framework::Tensor* in_x_grad =
context.Output<framework::Tensor>(framework::GradVarName("X"));
std::string unpooling_type = context.Attr<std::string>("unpoolingtype");
Contributor:

unpoolingtype -> unpooling_type

Contributor (author):

done

in_x_grad->mutable_data<T>(context.GetPlace());
zero(device_ctx, in_x_grad, static_cast<T>(0));
}
math::Unpool2dMaxGradFunctor<Place, T, T2> unpool2d_max_backward;
Contributor:

Pooling has maxPool and avgPool; at the moment Unpooling only does the max-pool upsampling. Should we also add avg-pool upsampling? @qingqing01 @NHZlX

Contributor:

We can leave avg out for now and add it when there's a need.

namespace operators {
namespace math {

template <typename Place, typename T, typename T2>
Contributor:

typename T, typename T2 -> typename T1, typename T2

hidx = (index - index % out_W) / out_W
widx = index % out_W
out[nidx, cidx, int(hidx), int(widx)] = \
input[nidx, cidx, h, w]
Contributor:

I think the Python code could be written more simply.

Contributor (author):

Since this is test code, being correct is good enough for now.

sweetsky0901 (author) left a review comment:

done


if (output_data) {
math::SetConstant<Place, T> set_zero;
set_zero(context.device_context(), out, static_cast<T>(0));
}
Contributor (author):

I'm in the habit of guarding every pointer. Strictly speaking it should check the negation and throw an exception.
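What the author describes would look roughly like this (a sketch; the error message is illustrative):

// Fail loudly instead of silently skipping the zero-fill:
PADDLE_ENFORCE(output_data != nullptr, "output_data should not be null");
math::SetConstant<Place, T> set_zero;
set_zero(context.device_context(), out, static_cast<T>(0));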


ksize – Size of the max pooling window.
stride – Stride of the max pooling window.
"It is set to kernel_size by default.
padding – Padding that was added to the input"
Contributor:

Lines 41-48 already explain the meaning of ksize, stride, and padding; there is no need to repeat that here. The DOC can instead give the reference paper, the formula for computing the output shape, and so on.

For example, see PyTorch: http://pytorch.org/docs/master/nn.html?highlight=unpool#torch.nn.MaxUnpool2d

Contributor (author):

done
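For reference, the output-shape formula in question (as used by the naive implementation in the test, and matching PyTorch's MaxUnpool2d documentation) is:

out_H = (in_H - 1) * strides[0] - 2 * paddings[0] + ksize[0]
out_W = (in_W - 1) * strides[1] - 2 * paddings[1] + ksize[1]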



int bsize = input_height * input_width * channels;
int csize = input_height * input_width;
int out_bsize = output_height * output_width * channels;
int out_csize = output_height * output_width;
Contributor:

The naming feels a bit arbitrary.

int in_c_stride = input_height * input_width;
int in_n_stride = in_c_stride * channels;
int out_c_stride = output_height * output_width;
int out_n_stride = out_c_stride * channels;

The data is NCHW, so n_stride and c_stride fit here; other better names are fine too, but bsize, csize, etc. feel a bit arbitrary.

Contributor (author):

done

int cidx = boffset / csize;
int out_offset = bidx * out_bsize + cidx * out_csize;
int out_index = indices_data[i];
PADDLE_ASSERT(out_index < (output_height * output_width));
Contributor:

PADDLE_ASSERT(out_index < out_c_stride);

Contributor (author):

done

int bsize = input_height * input_width * channels;
int csize = input_height * input_width;
int out_bsize = output_height * output_width * channels;
int out_csize = output_height * output_width;
Contributor:

Same as above.

Contributor (author):

done

"(Tensor) The input tensor of unpool operator. "
"The format of input tensor is NCHW. Where N is batch size, C is the "
"number of channels, H and W is the height and width of feature.");
AddInput("Y",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

agree with @chengduoZH

distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
Contributor:

Don't indent this; see paddle/operators/accuracy_op.h for reference.

Contributor (author):

done

def unpool2dmax_forward_naive(input, indices, ksize, strides, paddings):
s0, s1, s2, s3 = input.shape
out_H=(s2 - 1) * strides[0] - 2 * paddings[0] + ksize[0]
out_W=(s2 - 1) * strides[1] - 2 * paddings[1] + ksize[1]
Contributor:

Please install the format-checking tool!!! (I said this in the previous PR.) The formatting here does not conform to Python style: = should have spaces around it.

Also, don't mix upper and lower case in Python names; there are several such places in this Python file, please fix them all.

Contributor (author):

I did install the tool!!! It just didn't flag these.
done

arg = x_masked.argmax()
indices[nidx, cidx, i, j] = \
(r_start + arg / self.ksize[1]) * W + \
c_start + arg % self.ksize[1]
Contributor:

There's no need to write the max-pooling forward by hand here; just call the function max_pool2D_forward_naive in python/paddle/v2/fluid/tests/test_pool_max_op.py.

Contributor (author):

Wouldn't the unit-test files then depend on each other?

indices[nidx, cidx, i, j] = \
(r_start + arg / self.ksize[1]) * W + \
c_start + arg % self.ksize[1]
output = self.Unpool2d_forward_naive(input, indices, self.ksize, \
Contributor:

self.Unpool2d_forward_naive -> unpool2d_forward_naive

remove line 72.

Contributor (author):

This is in preparation for a future avg test, for compatibility.

sweetsky0901 (author) left a review comment:

done


const T2 * indices_data = indices.data<T2>();
T* output_data = output->mutable_data<T>(context.GetPlace());
int nthreads = batch_size * output_channels * input_height * input_width;
int blocks = (nthreads + 1024 - 1) / 1024;
Contributor (author):

done



qingqing01 (Contributor) left a review comment:

I hope the next PR continues with 3D support and also fixes the issues left over from this PR.

arg = x_masked.argmax()
indices[nidx, cidx, i, j] = \
(r_start + arg / self.ksize[1]) * wsize + \
c_start + arg % self.ksize[1]
Contributor:

For the max-pooling computation here, please reuse the code in paddle/v2/fluid/tests/test_pool_max_op.py or paddle/v2/fluid/tests/test_pool2d_op.py.

@sweetsky0901 merged commit 966a6ce into PaddlePaddle:develop on Nov 29, 2017
Successfully merging this pull request may close these issues: Unpooling Operator.
5 participants