[DNNL][Relay extern-schedule] DNNL Conv2D Kernel enable by assigning "-libs=mkldnn" #11571
Conversation
@comaniac @AndrewZhaoLuo @junrushao1994 @masahi could you please help review and suggest how to organize the file so that USE_MKLDNN is not only for cblas? Should the test case be added in test_dnnl.py?
Will take a look early next week.
@yangulei thanks for your suggestion! :) I've added the check for the registered external function in the test case, and it seems to work.
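For reference, a minimal sketch of such an existence check (the packed-function name matches the diff below; the test scaffolding itself is an assumption, not the PR's exact test code):

```python
import pytest
import tvm


def test_mkldnn_conv2d_registered():
    # Skip when TVM was built without the oneDNN (MKL-DNN) runtime,
    # i.e. when the packed function was never registered.
    func = tvm.get_global_func("tvm.contrib.mkldnn.conv2d", allow_missing=True)
    if func is None:
        pytest.skip("oneDNN (MKL-DNN) runtime is not enabled")
    # ... build and run the conv2d workload here ...
```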
```python
out_shape,
[src, weights],
lambda ins, outs: tvm.tir.call_packed(
    "tvm.contrib.mkldnn.conv2d",
```
The name is a bit confusing... and so is the use of the libraries. We now have USE_MKLDNN (for cblas with matmul/dense) and USE_DNNL (for DNNL/OneDNN with matmul/dense/conv2d). AFAIK, MKL-DNN can be covered by DNNL, so should we deprecate MKL-DNN and use DNNL for both cases (e.g., -libs and BYOC)?
I agree that there should be a unified name of DNNL. We already have USE_DNNL_CODEGEN and USE_MKLDNN for BYOC and -libs. I suggest having USE_DNNL_LIBS for -libs and USE_DNNL_CODEGEN for BYOC, and changing 'mkldnn' to 'dnnl' in the code.
I don't think we really need two flags. It seems fine to enable both -libs and BYOC when USE_DNNL is ON.
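As a sanity check for the single-flag approach, both paths can be probed at runtime; a sketch (the -libs function name follows the diff above, while "relay.ext.dnnl" is an assumption based on TVM's usual BYOC registration scheme):

```python
import tvm

# -libs path: packed function registered by the contrib runtime.
libs_fn = tvm.get_global_func("tvm.contrib.mkldnn.conv2d", allow_missing=True)

# BYOC path: the external-codegen entry registered by the DNNL backend.
byoc_fn = tvm.get_global_func("relay.ext.dnnl", allow_missing=True)

print("-libs enabled:", libs_fn is not None, "| BYOC enabled:", byoc_fn is not None)
```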
Yep, that would be more concise, 💯
I'll try and commit later.
@comaniac hi, I've changed the cmake config flags and the symbols accordingly.
Force-pushed from aa4966f to 4a42fa0.
The current scope seems good to me. You could either make this PR upstream-compatible and rebase #11638, or rebase this PR after #11638 has been merged. I'm fine with either way.
Seems #11638 was suggested to support …
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM. cc @masahi
…"-libs=mkldnn" (apache#11571) * enable oneDNN conv op by using -libs=mkldnn * add channel last format support and let oneDNN chose blocked format. * remove unnecessary changes * reformat 3 files * reformat 1 file * change the argument name * change the argument name * rename the arguments * fix cpp lint issue * fix cpp lint issue * fix cpp lint issue * clang reformated * adjust .py import for testing * function existence check in test
This PR is mainly about mapping the oneDNN op implementations in the x86 Relay op strategy. We've observed that the nn.dense kernel can be dispatched to DNNL by assigning "-libs=mkldnn", and there is also a conv2d kernel implemented in runtime/contrib/dnnl.
So we map it in the x86 Relay op strategy and optimize the kernel implementation to let DNNL choose the blocked format according to the input shape, as the performance-profiling example described in the oneDNN docs.
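For illustration, a minimal sketch of how a user dispatches to these kernels (the workload and shapes are arbitrary):

```python
import tvm
from tvm import relay

# A small conv2d workload in NCHW layout.
data = relay.var("data", shape=(1, 64, 56, 56), dtype="float32")
weight = relay.var("weight", shape=(64, 64, 3, 3), dtype="float32")
out = relay.nn.conv2d(data, weight, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))

# "-libs=mkldnn" asks the x86 op strategy to dispatch conv2d (and dense)
# to the oneDNN kernels when TVM was built with them enabled.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm -libs=mkldnn")
```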
Here are the details:
We are trying to enable more DNNL kernels, including different formats and datatypes, this way.