
Add use of global flag 'use_mkldnn' to layer_helper #26497

Conversation

@arlesniak (Contributor) commented Aug 20, 2020:

PR types

New features

PR changes

Others

Describe

Make append_activation in layer_helper use the global FLAGS_use_mkldnn flag.

@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

@jczaja added labels Intel and dygraph (issues related to dygraph mode) on Aug 20, 2020
@arlesniak (Contributor, Author) commented:

PR-CI-Coverage does not update its status, but after clicking Details everything is green.

@luotao1 (Contributor) commented Aug 27, 2020:

> PR-CI-Coverage does not update status but after clicking Details it's all green

Some network error. If you fix #26497 (comment), you can push a new commit to trigger it.

@arlesniak force-pushed the arlesniak/global_use_mkldnn_in_layer_helper branch from 3f7190d to 2fbf8a9 on August 27, 2020
@grygielski (Contributor) previously approved these changes Aug 28, 2020:

LGTM

@luotao1 (Contributor) commented Aug 28, 2020:

2020-08-27 19:32:07 ****************
2020-08-27 19:32:07 0. You must have one RD (zhiqiu (Recommend) or phlrain) approval for the api change for the opreator-related api without 'core.ops'.
2020-08-27 19:32:07 For more details, please click [https://github.com/PaddlePaddle/Paddle/wiki/paddle_api_development_manual.md]
2020-08-27 19:32:07 Related APIs: paddle.common_ops_import.LayerHelper.append_activation
2020-08-27 19:32:07 paddle.common_ops_import.LayerHelper.append_op
2020-08-27 19:32:07 paddle.fluid.layers.utils.LayerHelper.append_activation
2020-08-27 19:32:07 paddle.fluid.layers.utils.LayerHelper.append_op
2020-08-27 19:32:07 
2020-08-27 19:32:07 There are 1 approved errors.
2020-08-27 19:32:07 ****************

@luotao1 luotao1 requested a review from zhiqiu August 28, 2020 07:40
Comment on lines 45 to 48
op = self.main_program.current_block().append_op(*args, **kwargs)
if self.debug_:
    self.appended_op = op
return op
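The debug hook above can be illustrated with a toy stand-in, independent of Paddle (`LayerHelperSketch` is an illustrative name, not Paddle's real LayerHelper): when `debug_` is set, the helper remembers the last appended op so a test can inspect its attributes.

```python
class LayerHelperSketch:
    """Toy stand-in for the debug hook under review (not Paddle's
    real LayerHelper): when debug_ is set, remember the last
    appended op so a test can inspect its attributes."""

    def __init__(self, debug=False):
        self.debug_ = debug
        self.ops = []          # stands in for the block's op list
        self.appended_op = None

    def append_op(self, **attrs):
        op = dict(attrs)       # a real helper builds an Operator here
        self.ops.append(op)
        if self.debug_:
            self.appended_op = op
        return op
```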
Reviewer (Contributor) commented:
It seems appended_op is only used to get the operator in order to test its attrs.
I think we don't need to add a member to LayerHelper; the first appended op (in your test case, only one op is appended) can be obtained via fluid.default_main_program().global_block().ops[0].

@arlesniak (Contributor, Author) replied Aug 28, 2020:

Thank you for the comment. fluid.default_main_program().global_block().ops is empty in dygraph mode; I'm working on a different solution.

Comment on lines 157 to 161
use_mkldnn = self.kwargs.get(
    'use_mkldnn', core.globals().get("FLAGS_use_mkldnn", False))
if use_mkldnn:
    act['use_mkldnn'] = use_mkldnn
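The precedence in the snippet above (an explicit per-op use_mkldnn kwarg wins; otherwise fall back to the global flag, defaulting to False) can be sketched in plain Python without Paddle; `resolve_use_mkldnn` and the dict arguments are illustrative, not Paddle APIs:

```python
def resolve_use_mkldnn(kwargs, global_flags):
    """Explicit per-op 'use_mkldnn' kwarg takes precedence; otherwise
    fall back to the global FLAGS_use_mkldnn entry (default False)."""
    return kwargs.get('use_mkldnn',
                      global_flags.get('FLAGS_use_mkldnn', False))

# With no per-op kwarg, the global flag decides:
act = {'type': 'relu'}
use_mkldnn = resolve_use_mkldnn({}, {'FLAGS_use_mkldnn': True})
if use_mkldnn:
    act['use_mkldnn'] = use_mkldnn   # act now carries the attribute
```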

@zhiqiu (Contributor) commented Aug 28, 2020:
Maybe it is better to implement this functionality on the C++ side, for two reasons:

  1. FLAGS_use_mkldnn is a GFLAG, which is declared and used in C++.
  2. There may be other ways to add an activation op, for example _append_activation_in_dygraph.

@arlesniak (Contributor, Author) replied:

I've changed the tracer on the C++ side, so the above changes are no longer needed.

@grygielski (Contributor) left a comment:

LGTM

@zhiqiu zhiqiu self-requested a review August 31, 2020 07:59
@wojtuss left a comment:

LGTM

@luotao1 (Contributor) commented Aug 31, 2020:

@zhiqiu Please give this a second review!

Comment on lines +52 to +54
if (FLAGS_use_mkldnn) {
  attrs["use_mkldnn"] = true;
}
Reviewer (Contributor) commented:

LGTM for dygraph mode.
I don't see handling of use_mkldnn in static mode; is it already done in another PR?

@arlesniak (Contributor, Author) replied:

If you mean running through the executor, it's already handled in executor.cc:185.

res2 = fluid.layers.relu(a)
self.assertTrue(np.array_equal(res1.numpy(), res2.numpy()))

def test_append_activation_in_dygraph_global_use_mkldnn(self):
@zhiqiu (Contributor) commented Aug 31, 2020:

LGTM for the tests that check the accuracy of the result when use_mkldnn is True and when it is False.

I wonder whether there is another way to verify that mkldnn is really used. For example, we could add some VLOG output in the mkldnn kernel and see whether the log is printed when fluid.set_flags({'FLAGS_use_mkldnn': True}) is called; we can also do it manually if it is difficult in a unittest. Maybe you have already done that, I just wonder.

@arlesniak (Contributor, Author) replied:

The problem we had is that, AFAIK, there is currently no better way to check whether the mkldnn kernel has run than to inspect the DNNL_VERBOSE output. Another idea is to have the kernel throw an exception with extended information about the kernel type. A further problem is that, AFAIK, the Python unittest library does not parse C++ stdout out of the box. Do you have a suggestion for the best way to check, from the Python level, which type of kernel has run?

Reviewer (Contributor) replied:

> Do you have suggestion what is the best way to check what type of kernel has been run from the python level

I think checking the output of DNNL_VERBOSE is OK. You can do it in the next PR.
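One way to automate the DNNL_VERBOSE check discussed above is to run the workload in a subprocess with DNNL_VERBOSE=1 and scan its output for the kernel of interest; `log_mentions_kernel` and `run_with_dnnl_verbose` are illustrative helpers, not part of Paddle's test utilities (oneDNN verbose lines begin with "dnnl_verbose,").

```python
import os
import subprocess
import sys


def log_mentions_kernel(log, kernel_name):
    """Return True if any oneDNN verbose line mentions kernel_name
    (e.g. 'eltwise' for activation primitives)."""
    return any(line.startswith("dnnl_verbose") and kernel_name in line
               for line in log.splitlines())


def run_with_dnnl_verbose(script_path, kernel_name):
    """Run a Python script with DNNL_VERBOSE=1 and report whether the
    named oneDNN kernel shows up in its combined output."""
    env = dict(os.environ, DNNL_VERBOSE="1")
    proc = subprocess.run([sys.executable, script_path], env=env,
                          capture_output=True, text=True)
    return log_mentions_kernel(proc.stdout + proc.stderr, kernel_name)
```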

Labels: Intel; dygraph (issues related to dygraph mode)
Projects: None yet
Linked issues: None yet
6 participants