INTEL MKL: Enhance MkL BatchNorm ops with primitive reuse #19402
Conversation
Closing temporarily, pending the conv_bwd PR.
mkl_conv_ops.cc has been reverted to avoid any review confusion. Thanks.
Pending on #19754.
Reopening since PR #19399 has been merged.
So there's good news and bad news. 👍 The good news is that everyone that needs to sign a CLA (the pull request submitter and all commit authors) have done so. Everything is all good there. 😕 The bad news is that it appears that one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that here in the pull request. Note to project maintainer: This is a terminal state, meaning the
Yes, I have already signed the CLA, and it should be OK to contribute.
Hi, |
@gzmkl I think many of my comments on PR #19403 apply to this one as well; please modify accordingly.
A Googler has manually verified that the CLAs look good. (Googler, please make sure the reason for overriding the CLA status is clearly documented in these comments.) |
I plan to apply the PR #19403 comments to this PR too.
Latest code change is based on the PR #19403 code-review suggestions.
@gzmkl Thanks for the update. Let's try to get PR #19403 merged first before we push the remaining PRs. Running tests for it again now.
```cpp
    (*diff_src_tensor)->flat<T>().data()[i] = 0;
  int num_elements = (*diff_src_tensor)->shape().num_elements();
  auto diff_src_data = (*diff_src_tensor)->flat<T>().data();
  for (size_t i = 0; i < num_elements; i++)
```
I would use std::fill instead of a loop. Same everywhere.
Sure, will do.
There are similar loops in files that are not changed in this PR. Should I make the same change there?
Please do. :-)
I have refactored the code in multiple places in mkl_fused_batch_norm_op.cc.
To avoid potential merge conflicts, I did not apply this recommendation to source files not related to this PR.
We will include this suggestion, along with others (such as changing the signature of Execute() to use proper const or non-const argument declarations), in a separate "code clean-up" PR.
Thanks!
@gzmkl resolved the conflict.
…smus's suggestion
Hi Rasmus, please choose the "master" version to address the following conflict in mkl_util.h:

```
<<<<<<< primreuse_batch_norm
```

Please take the branch code with the following conflict.
PiperOrigin-RevId: 207737829
```
@@ -262,6 +262,7 @@ class MklFusedBatchNormOp : public OpKernel {
  }

  void MklCreateInputLayout(OpKernelContext* context) {
    const Tensor& input = MklGetInput(context, 0);
```
Just curious, why do we have this line? This local variable isn't used anywhere in the function.
It should not be there. See my comment below.
Thank you!
```
@@ -544,6 +545,7 @@ class MklFusedBatchNormGradOp : public OpKernel {
  }

  void MklCreateInputLayout(OpKernelContext* context) {
    const Tensor& input = MklGetInput(context, 0);
```
This local variable also isn't used anywhere in this function.
Good catch!
Yes, this line of code should not be there. It was in a function belonging to the MKL-ML integration, which will be removed in the long run since we now have the MKL-DNN integration.
I will clean up this code after all primitive-reuse PRs have been merged (only the Relu one remains). I have a TODO list based on Rasmus's suggestions, which were applied only to the individual PRs (and thus only to the related changed files), and will submit a "clean-up" PR.
Thanks!
Thanks for your clarification!
Enable MKL BatchNorm ops with primitive reuse, to improve
(1) model training and
(2) inference with small batch sizes
by minimizing primitive-creation time.
************ Notes *******************
Please review and merge PR #19399 first.