
[DNNL][Relay extern-schedule] DNNL Conv2D Kernel enable by assigning "-libs=mkldnn" #11571

Merged · 15 commits · Jun 10, 2022

Conversation

Qianshui-Jiang
Contributor

This PR is mainly about mapping the oneDNN op implementation into the X86 Relay op strategy. We've observed that the nn.dense kernel can already be dispatched to DNNL by assigning "-libs=mkldnn", and there is also a conv2d kernel implemented in runtime/contrib/dnnl.

So we map it into the X86 Relay op strategy and optimize the kernel implementation to let DNNL choose a blocked format according to the input shape, as described in the performance-profiling example in the oneDNN docs.

Here are the details:

  • Adjust the DNNL Conv2D implementation to support the NHWC format and automatically reorder inputs/weights/outputs to obtain the best performance.
  • Add a 'target.libs=mkldnn' branch in the Relay X86 op strategy for both the NCHW and NHWC Conv2D kernels (a rough sketch follows this list).
  • Add two test functions for the cases mentioned above.
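A rough sketch of what such a strategy branch could look like; this is only a sketch, and the `topi.x86.conv2d_nchw_mkldnn` compute name and the `plevel` value are illustrative placeholders, not necessarily the exact symbols added in this PR:

```python
# Sketch of a "-libs=mkldnn" branch for conv2d_strategy_cpu in
# python/tvm/relay/op/strategy/x86.py. The compute symbol below is a
# hypothetical placeholder.
from tvm import topi
from tvm.relay.op.strategy.generic import wrap_compute_conv2d, wrap_topi_schedule


def add_mkldnn_conv2d_impl(strategy, target, layout):
    """Register a DNNL-backed conv2d implementation when -libs=mkldnn is set."""
    if "mkldnn" in target.libs and layout in ("NCHW", "NHWC"):
        strategy.add_implementation(
            wrap_compute_conv2d(topi.x86.conv2d_nchw_mkldnn),  # hypothetical compute
            wrap_topi_schedule(topi.generic.schedule_extern),  # extern kernel: no TIR schedule
            name="conv2d_mkldnn.x86",
            plevel=15,  # outrank the default TOPI kernels when the lib is requested
        )
```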

We are trying to enable more DNNL kernels, covering different formats and data types, in this way.
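For reference, once TVM is built with USE_MKLDNN enabled, offloading a conv2d to these kernels only requires adding the library to the target string. A minimal usage sketch (the shapes and workload here are arbitrary, chosen just for illustration):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# A small NHWC conv2d workload dispatched to DNNL via -libs=mkldnn.
data = relay.var("data", shape=(1, 56, 56, 64), dtype="float32")
weight = relay.var("weight", shape=(3, 3, 64, 64), dtype="float32")
out = relay.nn.conv2d(data, weight, padding=(1, 1),
                      data_layout="NHWC", kernel_layout="HWIO")
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))

target = "llvm -libs=mkldnn"  # requests the DNNL extern kernels for supported ops
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)

dev = tvm.cpu()
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("data", np.random.uniform(size=(1, 56, 56, 64)).astype("float32"))
rt.set_input("weight", np.random.uniform(size=(3, 3, 64, 64)).astype("float32"))
rt.run()
```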

@Qianshui-Jiang
Contributor Author

@comaniac @AndrewZhaoLuo @junrushao1994 @masahi could you please help review and suggest how to organize the files so that USE_MKLDNN is not only for cblas? Should the test case be added in test_dnnl.py?

@comaniac
Contributor

comaniac commented Jun 4, 2022

Will take a look early next week.

@Qianshui-Jiang
Contributor Author

@yangulei thanks for your suggestion! :) I've added the check for the registered external function in the test case, and it seems to work.
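For context, a minimal sketch of such an existence check (the actual test body is omitted; guarding with pytest.mark.skipif is just one way to do it):

```python
import pytest
import tvm


def mkldnn_conv2d_registered():
    # The extern kernel is exposed as a packed function and is only present
    # when TVM was built with the MKL-DNN contrib runtime enabled.
    return tvm.get_global_func("tvm.contrib.mkldnn.conv2d", allow_missing=True) is not None


@pytest.mark.skipif(not mkldnn_conv2d_registered(),
                    reason="tvm.contrib.mkldnn.conv2d is not registered")
def test_conv2d_mkldnn():
    ...  # build the conv2d with -libs=mkldnn and compare against a reference result
```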

out_shape,
[src, weights],
lambda ins, outs: tvm.tir.call_packed(
"tvm.contrib.mkldnn.conv2d",
Contributor


The name is a bit confusing... and so is the use of libraries. We now have USE_MKLDNN (for cblas with matmul/dense) and USE_DNNL (for DNNL/oneDNN with matmul/dense/conv2d). AFAIK, MKL-DNN can be covered by DNNL, so should we deprecate MKL-DNN and use DNNL for both cases (e.g., -libs and BYOC)?

Contributor Author


I agree that there should be a unified name of DNNL. We already have USE_DNNL_CODEGEN and USE_MKLDNN for BYOC and -libs; I suggest having USE_DNNL_LIBS for -libs and USE_DNNL_CODEGEN for BYOC, and changing 'mkldnn' to 'dnnl' in the code.

Contributor


I don't think we really need to have 2 flags? It seems fine to enable both libs and BYOC when USE_DNNL is ON.

Contributor Author


Yep, that would be more concise, 💯
I'll try and commit later.

@Qianshui-Jiang
Contributor Author

@comaniac hi, I changed the cmake config flags and the symbol mkldnn to dnnl; now setting USE_DNNL enables both -libs and dnnl_codegen.
BTW, it seems a lot of files are affected; should we split this PR into two? Please take a look and leave more comments. Thanks a lot!

@comaniac
Contributor

comaniac commented Jun 8, 2022

It's a good idea. Please split the part that unifies MKL-DNN and DNNL into a separate PR.
Since it involves compatibility changes (e.g., config.cmake), it may need more reviews and visibility.

cc @masahi @areusch

@Qianshui-Jiang
Contributor Author

@comaniac @masahi @areusch, hi guys, a new PR has been created in #11638; I'll rebase this PR after that one is merged into mainline.
Please take a look, thanks!

@Qianshui-Jiang
Contributor Author

Qianshui-Jiang commented Jun 9, 2022

@comaniac I reset this PR to be compatible with the current mainline (using mkldnn), so it only contains the change described in the title. Or should we update them all later in #11638?

@comaniac
Contributor

comaniac commented Jun 9, 2022

The current scope seems good to me. You could either make this PR upstream compatible and rebase #11638, or rebase this PR after 11638 has merged. I'm fine with either way.

@Qianshui-Jiang
Contributor Author

> The current scope seems good to me. You could either make this PR upstream compatible and rebase #11638, or rebase this PR after 11638 has merged. I'm fine with either way.

It seems #11638 was suggested to keep supporting -libs=mkldnn while translating it to -libs=dnnl, so a few more small changes need to be considered.
Could you please approve and merge this one first? Then we could update everything related to -libs=mkldnn together in #11638.

Contributor

@comaniac comaniac left a comment


LGTM. cc @masahi

juda pushed a commit to juda/tvm that referenced this pull request Jun 21, 2022
…"-libs=mkldnn" (apache#11571)

* enable oneDNN conv op by using -libs=mkldnn

* add channel last format support and let oneDNN chose blocked format.

* remove unnecessary changes

* reformat 3 files

* reformat 1 file

* change the argument name

* change the argument name

* rename the arguments

* fix cpp lint issue

* fix cpp lint issue

* fix cpp lint issue

* clang reformated

* adjust .py import for testing

* function existence check in test