
MKLDNN layout: Support for activation operator #11124

Merged

Conversation

mozga-intel
Contributor

Waiting for support of the MKLDNN layout.

Please have a look at the activation operator with MKLDNN layout support and assess whether we can keep this code design.

This code uses the layout implementation from the previous pull request. Therefore some of the functions used in this code are not available in this pull request, but are available here.

This version of the code can be merged into the main branch once the layout pull request is accepted.

The concept of splitting the long code was proposed by @luotao1.

This pull request is related to #11040.

@mozga-intel mozga-intel requested a review from luotao1 June 1, 2018 10:02
@mozga-intel mozga-intel mentioned this pull request Jun 1, 2018
@mozga-intel mozga-intel force-pushed the mozga-intel/Activation_mkldnn_layout branch from 8fac6b8 to 2ba42fd Compare June 1, 2018 10:43
@mozga-intel mozga-intel force-pushed the mozga-intel/Activation_mkldnn_layout branch 3 times, most recently from 5150a23 to 5f8cb3d Compare June 12, 2018 07:57
@mozga-intel
Contributor Author

@luotao1, @tensor-tang The code is ready for the code-review process.

@mozga-intel
Contributor Author

"For some kinds of element wise operations (namely ReLU with alpha parameter equals 0) dst and src can be interchangeable for the backward pass, which allows performing in-place forward even for training."

@mozga-intel
Contributor Author

We have an "out-of-place" implementation in this code.

// save input data to be referred in backward path
auto p_src_data = std::make_shared<const T *>(src_data);
dev_ctx.SetBlob(key_src_data, p_src_data);
std::shared_ptr<memory> dst_memory;
Contributor

Please try to use unique_ptr next time.

Contributor Author

OK, I will change it next time.
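A hypothetical sketch of the suggestion (assumption: the SetBlob cache from the snippet above requires shared ownership; the names here are illustrative only). A unique_ptr can still express the single-owner intent up to the hand-off point:

#include <memory>

int main() {
  const float *src_data = nullptr;  // stand-in for the real source pointer
  // Single-owner style, as the reviewer suggests.
  auto p_src_data = std::make_unique<const float *>(src_data);
  // If the cache interface demands shared ownership, the transfer is explicit.
  std::shared_ptr<const float *> blob = std::move(p_src_data);
  // dev_ctx.SetBlob(key_src_data, blob);  // as in the snippet under review
  return 0;
}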

@@ -82,6 +83,7 @@ class ActivationOp : public framework::OperatorWithKernel {
   ctx->ShareLoD("X", /*->*/ "Out");
 }

+ protected:
Contributor

why protect them?

Contributor Author
@mozga-intel mozga-intel Jun 15, 2018

Everywhere we have the GetExpectedKernelType function, we use the protected keyword. That is the only reason I used it.
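A minimal sketch of the convention being described (simplified stand-in types, not the actual Paddle declarations): GetExpectedKernelType overrides a protected hook in the base operator class, so derived operators keep the same access level.

// Simplified stand-ins for the real OperatorWithKernel / OpKernelType types.
struct ExecutionContext {};
struct OpKernelType {};

class OperatorWithKernelBase {
 public:
  virtual ~OperatorWithKernelBase() = default;

 protected:
  // Hook called internally by the base class's run path, not by users.
  virtual OpKernelType GetExpectedKernelType(const ExecutionContext &) const = 0;
};

class ActivationOpSketch : public OperatorWithKernelBase {
 protected:
  // Kept protected to match the access level of the base-class hook.
  OpKernelType GetExpectedKernelType(const ExecutionContext &) const override {
    return OpKernelType{};
  }
};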


 template <typename T, mkldnn::algorithm algorithm>
 struct MKLDNNActivationFunc : public BaseActivationFunctor<T> {
-  template <typename ExecContext>
-  void operator()(const ExecContext &ctx) const {
+  void operator()(const framework::ExecutionContext &ctx) const {
Contributor

why remove the typename?

Contributor Author
@mozga-intel mozga-intel Jun 15, 2018

Because I didn't need it; it was redundant code.
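A minimal before/after sketch of the change (stand-in types, not the Paddle headers): the functor was only ever invoked with framework::ExecutionContext, so the template parameter was redundant, and spelling out the concrete type lets the compiler check the signature directly.

// Stand-in for framework::ExecutionContext.
struct ExecutionContext {};

// Before: a template parameter that is always instantiated with the same type.
struct BeforeFunctor {
  template <typename ExecContext>
  void operator()(const ExecContext &) const {}
};

// After: the concrete type is spelled out; the redundant template is gone.
struct AfterFunctor {
  void operator()(const ExecutionContext &) const {}
};

int main() {
  ExecutionContext ctx;
  BeforeFunctor{}(ctx);
  AfterFunctor{}(ctx);
  return 0;
}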

@mozga-intel mozga-intel force-pushed the mozga-intel/Activation_mkldnn_layout branch from 5f8cb3d to 792d3b2 Compare June 18, 2018 13:42
@mozga-intel
Contributor Author

@luotao1 Could you continue reviewing this code, please?

Contributor
@luotao1 luotao1 left a comment

LGTM! Thanks very much!

@luotao1 luotao1 merged commit 49f23e6 into PaddlePaddle:develop Jun 19, 2018
@luotao1 luotao1 moved this from Doing to Done in Intel Optimization on Fluid Jun 19, 2018