Add use of global flag 'use_mkldnn' to layer_helper #26497
Conversation
Thanks for your contribution!
PR-CI-Coverage does not update its status, but after clicking Details it's all green.
Some network error. If you fix #26497 (comment), you can push a new commit to re-trigger it.
Force-pushed from 3f7190d to 2fbf8a9.
LGTM
python/paddle/fluid/layer_helper.py
Outdated
```python
op = self.main_program.current_block().append_op(*args, **kwargs)
if self.debug_:
    # Keep a handle to the appended op so tests can inspect its attrs.
    self.appended_op = op
return op
```
It seems appended_op is only used to get the operator for testing its attrs. I think we don't need to add a member to LayerHelper; the first appended op (in your test case, only one op is appended) can be obtained by fluid.default_main_program().global_block().ops[0].
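A minimal sketch of this suggestion, assuming a single op has been appended to the default program (as in the test case); the Operator.attr() accessor used to read the attribute is an assumption here:

```python
import paddle.fluid as fluid

# Only one op was appended in the test case, so it is ops[0] of the
# default program's global block.
op = fluid.default_main_program().global_block().ops[0]
use_mkldnn = op.attr('use_mkldnn')  # assumed attribute accessor
```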
Thank you for the comment. fluid.default_main_program().global_block().ops is empty in dygraph mode, so I'm working on a different solution.
python/paddle/fluid/layer_helper.py
Outdated
```python
# An explicit use_mkldnn kwarg takes precedence over the global flag.
use_mkldnn = self.kwargs.get(
    'use_mkldnn', core.globals().get("FLAGS_use_mkldnn", False))
if use_mkldnn:
    act['use_mkldnn'] = use_mkldnn
```
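As a hedged restatement of the fallback above, outside the LayerHelper context (core.globals() is taken from the quoted diff):

```python
from paddle.fluid import core

# When the caller supplies no explicit use_mkldnn kwarg, the global
# flag decides; otherwise the kwarg wins.
kwargs = {}  # no explicit 'use_mkldnn' passed by the caller
use_mkldnn = kwargs.get(
    'use_mkldnn', core.globals().get("FLAGS_use_mkldnn", False))
print(use_mkldnn)  # False unless FLAGS_use_mkldnn has been set
```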
Maybe it is better to implement this functionality on the C++ side, for two reasons:
- FLAGS_use_mkldnn is a GFLAG, which is declared and used in C++.
- There may be other ways to add an activation op, for example _append_activation_in_dygraph.
I've changed the tracer on the C++ side, so the changes above are no longer needed.
LGTM
LGTM
@zhiqiu Please give this a second review!
```cpp
// Propagate the global flag into the op's attributes.
if (FLAGS_use_mkldnn) {
  attrs["use_mkldnn"] = true;
}
```
LGTM for dygraph mode.
I don't see use_mkldnn handled in static mode; is that already done in another PR?
If you mean running through the executor, that's already handled in executor.cc:185.
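For that static-mode path, a hedged sketch of how the flag would be exercised through the executor (the program construction follows standard fluid usage; the claim that the flag is applied inside executor.cc comes from the comment above):

```python
import numpy as np
import paddle.fluid as fluid

fluid.set_flags({'FLAGS_use_mkldnn': True})

main = fluid.Program()
startup = fluid.Program()
with fluid.program_guard(main, startup):
    x = fluid.data(name='x', shape=[2, 3], dtype='float32')
    y = fluid.layers.relu(x)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup)
# The executor is expected to apply FLAGS_use_mkldnn when running the program.
out, = exe.run(main,
               feed={'x': np.random.rand(2, 3).astype('float32')},
               fetch_list=[y])
```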
```python
        res2 = fluid.layers.relu(a)
        self.assertTrue(np.array_equal(res1.numpy(), res2.numpy()))

    def test_append_activation_in_dygraph_global_use_mkldnn(self):
```
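The diff shows only the tail of the test; a hedged reconstruction of what the full case might look like (a method of a unittest.TestCase, with setup lines assumed from the visible fragment):

```python
import numpy as np
import paddle.fluid as fluid

def test_append_activation_in_dygraph_global_use_mkldnn(self):
    a_np = np.random.uniform(-2, 2, (10, 20)).astype(np.float32)
    with fluid.dygraph.guard():
        a = fluid.dygraph.to_variable(a_np)
        fluid.set_flags({'FLAGS_use_mkldnn': True})
        try:
            # relu runs with the mkldnn kernel enabled via the global flag.
            res1 = fluid.layers.relu(a)
        finally:
            fluid.set_flags({'FLAGS_use_mkldnn': False})
        # Reference run without mkldnn; results should match.
        res2 = fluid.layers.relu(a)
        self.assertTrue(np.array_equal(res1.numpy(), res2.numpy()))
```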
LGTM for the tests checking the accuracy of the result when use_mkldnn is True and False.
I wonder if there is another way to verify that mkldnn is really used. For example, we could add some VLOG output in the mkldnn kernel and check whether the log is printed when fluid.set_flags({'FLAGS_use_mkldnn': True}) is called. We can do it manually if it is difficult for a unit test.
Maybe you have already done that; I just wonder.
The problem we had is that, AFAIK, there is currently no better way to check whether the mkldnn kernel has been run than to inspect the output of DNNL_VERBOSE. Another idea is to run a kernel that throws an exception with extended information about the kernel type. A further problem is that, AFAIK, the Python unittest library cannot parse C++ stdout out of the box. Do you have a suggestion for the best way to check, from the Python level, which type of kernel has been run?
> Do you have a suggestion for the best way to check, from the Python level, which type of kernel has been run?

I think checking the output of DNNL_VERBOSE is OK. You can do it in the next PR.
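A rough sketch of that DNNL_VERBOSE check, run from Python via a subprocess; the 'dnnl_verbose' log prefix depends on the oneDNN version, so treat it as an assumption:

```python
import os
import subprocess
import sys

script = (
    "import numpy as np\n"
    "import paddle.fluid as fluid\n"
    "fluid.set_flags({'FLAGS_use_mkldnn': True})\n"
    "with fluid.dygraph.guard():\n"
    "    a = fluid.dygraph.to_variable(np.ones((2, 2), dtype='float32'))\n"
    "    fluid.layers.relu(a)\n"
)
env = dict(os.environ, DNNL_VERBOSE='1')
proc = subprocess.run([sys.executable, '-c', script],
                      env=env, capture_output=True, text=True)
# Assumption: oneDNN prints lines prefixed 'dnnl_verbose' when a
# mkldnn kernel actually executes.
assert 'dnnl_verbose' in (proc.stdout + proc.stderr)
```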
PR types
New features
PR changes
Others
Describe
Make append_activation in layer_helper use the global FLAGS_use_mkldnn flag.