
[oneDNN] GRU BF16 kernel #27731

Merged 2 commits on Oct 9, 2020

Conversation

jczaja
Contributor

@jczaja jczaja commented Sep 29, 2020

PR types

New features

PR changes

OPs

Describe

This PR introduces a GRU BF16 (brain floating point 16) kernel implemented via the oneDNN library.
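As background on the data type: BF16 keeps float32's 8-bit exponent but shortens the mantissa to 7 bits, so a float32 value can be converted by keeping only its top 16 bits. A minimal Python sketch (not part of this PR; it uses plain truncation for simplicity, whereas hardware typically applies round-to-nearest-even):

```python
import struct

def float32_to_bf16_bits(x):
    """Truncate a float32 to its bfloat16 bit pattern (top 16 bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_float32(bits):
    """Widen a bfloat16 bit pattern back to float32 (pad low 16 bits with zeros)."""
    return struct.unpack("<f", struct.pack("<I", bits << 16))[0]

# Pi survives with ~3 decimal digits of precision, the expected bf16 accuracy.
x = 3.141592653589793
print(bf16_bits_to_float32(float32_to_bf16_bits(x)))  # -> 3.140625
```

This loss of mantissa precision (while keeping float32's dynamic range) is why BF16 kernels such as this one trade a small amount of accuracy for faster inference on supporting hardware.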

@paddle-bot-old

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@jczaja jczaja added the inference Inference development, BF16, and Intel labels on Sep 29, 2020
test=develop
@jczaja
Contributor Author

jczaja commented Oct 1, 2020

@grygielski Could you please inspect the fusion_gru_mkldnn_op.cc changes?

"place does not support BF16 evaluation")
class TestFusionGRUBF16MKLDNNOp(OpTest):
def set_confs(self):
self.mkldnn_data_type = False
Contributor

@wozna wozna Oct 2, 2020


The mkldnn_data_type attribute is used mostly in the cpu_bfloat16 passes. The GRU kernel does not use it, but the attribute's type is std::string, so you should either set self.mkldnn_data_type = "bfloat16" or remove this assignment.
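To illustrate the suggested fix, here is a minimal, self-contained sketch; the class below is a stub standing in for the real Paddle OpTest subclass, and use_mkldnn is an assumed flag for illustration:

```python
# Stub illustrating the review suggestion: mkldnn_data_type is a
# std::string attribute on the C++ side, so the Python test should set
# it to the string "bfloat16", not the boolean False.
# NOTE: this class is a stand-in, not the real OpTest subclass.
class TestFusionGRUBF16MKLDNNOp:
    def set_confs(self):
        self.use_mkldnn = True              # assumed flag, for illustration
        self.mkldnn_data_type = "bfloat16"  # string, matching the attribute's type

op = TestFusionGRUBF16MKLDNNOp()
op.set_confs()
print(op.mkldnn_data_type)  # -> bfloat16
```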

Contributor


But can be done later.

Contributor Author


OK, will do in the next PR.

Contributor

@wozna wozna left a comment


LGTM

@jczaja
Contributor Author

jczaja commented Oct 2, 2020

@luotao1 Could you please start your review? Your approval is needed to pass CI.

Contributor

@arlesniak arlesniak left a comment


LGTM

@lidanqing-intel
Contributor

Hi @luotao1, only the APPROVAL CI is failing on this PR; please help approve. Thanks!

Contributor

@luotao1 luotao1 left a comment


LGTM

Note: unit tests are not allowed to be disabled without approval:

2020-10-01 20:54:05 You must have one RD (kolinwei(Recommend), or luotao1) approval for the usage of @unittest.skip or @unittest.skipIf.

You can add a new approval rule for the usage of @unittest.skipIf(not core.supports_bfloat16()) in tools/check_file_diff_approvals.sh and skip the original one, with one RD (jczaja, xxx, xxx, or luotao1) as the approver.

Or simply waive the approval requirement for @unittest.skip / @unittest.skipIf when the condition is not core.supports_bfloat16().
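As an illustration of the kind of check being discussed, here is a hypothetical shell sketch; the real tools/check_file_diff_approvals.sh has its own variable names and approval helpers, so the pattern and messages below are assumptions only:

```shell
# Hypothetical sketch of an approval exemption like the one discussed above.
# The real tools/check_file_diff_approvals.sh differs; this only shows the idea
# of matching added skipIf lines that use core.supports_bfloat16().
set -e

# Stand-in for the PR diff the CI script would inspect.
DIFF='+@unittest.skipIf(not core.supports_bfloat16(),
+                 "place does not support BF16 evaluation")'

if echo "$DIFF" | grep -qE '^\+.*@unittest\.skipIf\(not core\.supports_bfloat16\(\)'; then
  echo "supports_bfloat16 skipIf: exempt from extra RD approval"
else
  echo "generic skip: requires one RD approval"
fi
```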

@luotao1 luotao1 merged commit 606611d into PaddlePaddle:develop Oct 9, 2020
Labels: BF16, inference Inference development, Intel
5 participants