Local response normalize. #4426

Merged: 20 commits from cross_map_norm into PaddlePaddle:develop, Oct 26, 2017

Conversation

gongweibao (Contributor):

Fix #4425

gongweibao closed this on Sep 27, 2017
gongweibao changed the title from "[WIP]Local response normalize." to "Local response normalize." on Sep 28, 2017
public:
LRNOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The first input of lrn op");
Reviewer (Contributor):

Personally, I think a comment that only says "the first/second input of X" is not a good one.
#4314

fc op has a good comment style. https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/fc_op.cc#L123

gongweibao (Contributor, Author):

Done.

lcy-seso (Contributor):

The comments still do not follow the comment style in our doc: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/name_convention.md

Please take fc_op as an example.
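
For illustration, the suggested style applied to this op's input and output, with wording taken from the review suggestions later in this thread:

AddInput("X",
         "(Tensor) The input of LRN operator. "
         "It must be a 4D tensor with NCHW format.");
AddOutput("Out",
          "(Tensor) The output of LRN operator, "
          "which is also a 4D tensor with NCHW format.");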

PADDLE_ENFORCE_EQ(x_dim.size(), 4, "Input(X)'rank of LRNOp should be 4.");

ctx.Output<Tensor>("Out")->Resize(x_dim);
ctx.Output<Tensor>("mid_out")->Resize(x_dim);
Reviewer (Contributor):

Need to update to the latest code.

gongweibao (Contributor, Author):

Done.

)DOC");

AddOutput("Out", "(Tensor)The output of lrn op");
AddOutput("mid_out", R"Doc(
Reviewer (Contributor):

gongweibao (Contributor, Author):

Done.

: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", R"DOC(
(Tensor)Input of lrn op.It must be a 4 rank tenor with NCHW format.
)DOC");
Reviewer (Contributor):

(Tensor) The input of LRN operator. It must be a 4D tensor with NCHW format.

gongweibao (Contributor, Author):

Done.

(Tensor)Input of lrn op.It must be a 4 rank tenor with NCHW format.
)DOC");

AddOutput("Out", "(Tensor)The output of lrn op");
Reviewer (Contributor):

(Tensor) The output of LRN operator, which is also a 4D tensor with NCHW format.

gongweibao (Contributor, Author):

Done.


auto x_dims = ctx.Input<Tensor>("X")->dims();
auto *x_g = ctx.Output<framework::Tensor>(framework::GradVarName("X"));
x_g->Resize(x_dims);
Reviewer (Contributor):

Need to update to the latest code.

gongweibao (Contributor, Author):

Done.

and also used in backward process.
)Doc");

AddAttr<int>("n", R"DOC(
Reviewer (Contributor):

n -> size

gongweibao (Contributor, Author):

This is the variable used in the formula; wouldn't it be better to keep the name consistent with the formula?
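
For reference, the cross-map LRN formula (Krizhevsky et al., 2012) in which n is the size of the normalization window across channels; the symbols follow the paper, not this PR's code:

b^{i}_{x,y} = a^{i}_{x,y} \Big/ \Big( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \big( a^{j}_{x,y} \big)^{2} \Big)^{\beta}

where a is the input, b the output, and N the number of channels.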

}
}
}
};
Reviewer (Contributor):

For the GPU implementation, it would be best to reuse the existing kernel: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/function/CrossMapNormalOpGpu.cu

Loops like the current ones are inefficient on GPU.

gongweibao (Contributor, Author):

I have opened an issue and added it to the layer-port list; let's fix this in a new PR!
#5066
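
As context for the kernel-reuse point above, here is a minimal standalone sketch of how the forward pass maps onto the GPU: one thread per output element, so the only serial loop left is over the local channel window. This is an illustration under assumed names and buffer layout, not the linked CrossMapNormalOpGpu.cu kernel or this PR's code.

#include <cuda_runtime.h>

// One thread per (b, c, h, w) element of an NCHW tensor; mid caches
// (k + alpha * sum of squares) so the backward pass can reuse it.
__global__ void LrnForwardSketch(const float *in, float *mid, float *out,
                                 int N, int C, int H, int W,
                                 int n, float k, float alpha, float beta) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx >= N * C * H * W) return;
  int w = idx % W;
  int h = (idx / W) % H;
  int c = (idx / (W * H)) % C;
  int b = idx / (W * H * C);
  float sum = 0.0f;
  // Sum of squares over the cross-map window centered at channel c.
  for (int j = max(0, c - n / 2); j <= min(C - 1, c + n / 2); ++j) {
    float v = in[((b * C + j) * H + h) * W + w];
    sum += v * v;
  }
  float m = k + alpha * sum;
  mid[idx] = m;
  out[idx] = in[idx] * powf(m, -beta);
}

A launch such as LrnForwardSketch<<<(N*C*H*W + 255) / 256, 256>>>(...) keeps all elements independent, unlike a host-side loop.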

.SetDefault(0.0001)
.GreaterThan(0.0);

AddAttr<float>("beta", R"DOC(
Reviewer (Contributor):

For float-type attributes such as alpha, beta, and k, it would be best to use a template as well; see: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/dropout_op.cc#L40

gongweibao (Contributor, Author):

Done.
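
A sketch of what the suggested dropout_op-style templating could look like for these attributes; the beta and k defaults below (0.75 and 2.0, the common AlexNet values) are illustrative assumptions, not values from this PR:

template <typename AttrType>
class LRNOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  LRNOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    // Float-typed hyper-parameters use AttrType instead of a hard-coded float.
    AddAttr<AttrType>("alpha", "The scale parameter in the LRN formula.")
        .SetDefault(static_cast<AttrType>(0.0001))
        .GreaterThan(static_cast<AttrType>(0.0));
    AddAttr<AttrType>("beta", "The exponent in the LRN formula.")
        .SetDefault(static_cast<AttrType>(0.75))
        .GreaterThan(static_cast<AttrType>(0.0));
    AddAttr<AttrType>("k", "The offset in the LRN formula.")
        .SetDefault(static_cast<AttrType>(2.0))
        .GreaterThan(static_cast<AttrType>(0.0));
  }
};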

self.check_output()

def test_check_grad_normal(self):
self.check_grad(['X'], 'Out', max_relative_error=0.12)
Reviewer (Contributor):

If max_relative_error is too large, you can try a double-precision test.

gongweibao (Contributor, Author):

Done.
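
One way to act on the double-precision suggestion is to build the expected Out and mid_out tensors from a float64 NumPy reference, as in the sketch below; the function name and defaults are illustrative, not the actual test code:

import numpy as np

def lrn_ref(x, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """Float64 NumPy reference for the LRN forward pass (NCHW input)."""
    x = x.astype(np.float64)  # double precision tightens the gradient check
    C = x.shape[1]
    mid = np.full(x.shape, k, dtype=np.float64)
    for c in range(C):
        lo, hi = max(0, c - n // 2), min(C, c + n // 2 + 1)
        mid[:, c] += alpha * np.square(x[:, lo:hi]).sum(axis=1)
    return x * mid ** (-beta), mid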

gongweibao merged commit 9d142d5 into PaddlePaddle:develop on Oct 26, 2017
gongweibao deleted the cross_map_norm branch on Oct 26, 2017