
Re-org and impl operators for DNNLOWP based on mkldnn-bridge #17464

Closed
wants to merge 18 commits

Conversation

@gujinghui (Contributor) commented on Feb 25, 2019

  1. Adjust ideep code for DNNLOWP support readiness
  2. Implement operators for DNNLOWP (a usage sketch follows this list)
  3. Enable optimization for DNNLOWP

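As a rough illustration of what these items look like from the user side, the snippet below builds a tiny quantize → pool → dequantize net on the IDEEP device with the Caffe2 Python front end. It is not taken from this PR: the operator names match Caffe2's int8 op set, but the scale/zero-point values are placeholders and an IDEEP-enabled Caffe2 build is assumed.

```python
# Hedged sketch (not from this PR): a quantize -> pool -> dequantize net run on
# the IDEEP device. Assumes a Caffe2 build with IDEEP enabled; the scale and
# zero-point values are placeholders, not calibrated quantization parameters.
import numpy as np
from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace

ideep = core.DeviceOption(caffe2_pb2.IDEEP)

with core.DeviceScope(ideep):
    net = core.Net("int8_pool_example")
    # Quantize the float input to 8 bits on the IDEEP device.
    net.Int8Quantize("X", "X_q", Y_scale=0.05, Y_zero_point=0)
    # Int8MaxPool operates directly on the quantized tensor, no CPU fallback.
    net.Int8MaxPool("X_q", "Y_q", kernel=2, stride=2,
                    Y_scale=0.05, Y_zero_point=0)
    # Dequantize back to float for inspection.
    net.Int8Dequantize("Y_q", "Y")

workspace.FeedBlob("X", np.random.rand(1, 8, 8, 8).astype(np.float32),
                   device_option=ideep)
workspace.RunNetOnce(net)
print(workspace.FetchBlob("Y").shape)  # (1, 8, 4, 4)
```

Max pooling does not change the quantization parameters, which is why the same placeholder scale and zero point are passed through to the pooled output.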
gujinghui added some commits Jul 24, 2018

Support group filter in IDEEP fusion
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Fix shufflenet regression to disable conv sum fusion if no avx512 support

Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Upgrade mkldnn-bridge for dnnlowp support
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Update ideep code to use the latest mkldnn-bridge API
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Remove duplicated code in Conv and ConvFusion
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Document type definition of MKL-DNN bridge.
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Prepare for DNNLOWP support
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Impl order switch operators for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Impl Int8FC and cached weights for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Impl Int8SumRelu operator for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Impl Int8Conv operator for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Impl Int8Quantize/Int8Dequantize operators for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Impl Int8 tensor filler operators for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Impl Int8 pooling operators for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
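
To make the "cached weights" and Int8 filler commits above more concrete, here is a hedged sketch of how a DNNLOWP-style quantized conv is typically laid out as Caffe2 init/predict nets. The blob names, shapes, and scale/zero-point numbers are invented for illustration; the snippet only builds the protos, and actually executing the nets would require an IDEEP-enabled build and properly calibrated parameters.

```python
# Hedged sketch (not from this PR): a DNNLOWP-style quantized conv expressed as
# Caffe2 init/predict nets. Blob names, shapes, and scale/zero-point values are
# invented; running the nets (rather than just building the protos) would need
# an IDEEP-enabled build and calibrated quantization parameters.
import numpy as np
from caffe2.python import core

x_scale, w_scale = 0.05, 0.02
w_q = np.random.randint(0, 255, size=(16, 8, 3, 3)).astype(np.uint8)
b_q = np.zeros(16, dtype=np.int32)

init_net = core.Net("int8_init")
# Weights arrive pre-quantized as raw uint8 bytes plus scale/zero point.
init_net.Int8GivenTensorFill(
    [], "conv_w_q",
    values=w_q.tobytes(), shape=list(w_q.shape),
    Y_scale=w_scale, Y_zero_point=128)
# Bias is kept as int32 with scale = x_scale * w_scale and zero point 0.
init_net.Int8GivenIntTensorFill(
    [], "conv_b_q",
    values=b_q.tolist(), shape=[16],
    Y_scale=x_scale * w_scale, Y_zero_point=0)

predict_net = core.Net("int8_predict")
predict_net.Int8Quantize("X", "X_q", Y_scale=x_scale, Y_zero_point=0)
predict_net.Int8Conv(
    ["X_q", "conv_w_q", "conv_b_q"], "Y_q",
    kernel=3, pad=1, stride=1, Y_scale=0.1, Y_zero_point=0)
predict_net.Int8Dequantize("Y_q", "Y")

print(predict_net.Proto())
```

Because the weights are materialized once by the filler, the backend is in a position to reorder and cache the packed int8 weights across runs, which appears to be what the "cached weights" commit above is about.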
@gujinghui (Contributor, Author) commented on Feb 25, 2019

NOTE:
Please DO NOT merge.
Will rebase if no other changes are needed.

gujinghui and others added some commits Feb 25, 2019

Impl Int8Relu operator for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Implement Int8UpsampleNearest op.
We implement the Int8UpsampleNearest op on the ideep device so that we don't need to fall back to the CPU context to use it.
This op is commonly used by Detectron models such as Faster R-CNN and RetinaNet.

Change-Id: Ic8d3c069cac0d019aaf96c1f1df18043af613e40
Signed-off-by: Hui Wu <hui.h.wu@intel.com>
Enable conv fusions and unset training mode for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
Fuse order switch and quantize operators for DNNLOWP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
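
The last two commits wire DNNLOWP into Caffe2's IDEEP graph optimization. Below is a hedged sketch of how that pass is typically driven from Python; the helper name and signature follow caffe2.python.transformations of this era and may differ in other versions, and which patterns actually get fused depends on the build and the net.

```python
# Hedged sketch (not from this PR): applying Caffe2's IDEEP optimization pass
# to a small quantized net so fusions such as conv fusion can take effect.
# Assumes an IDEEP-enabled build; blob names and scale values are placeholders.
from caffe2.proto import caffe2_pb2
from caffe2.python import core, transformations

ideep = core.DeviceOption(caffe2_pb2.IDEEP)
with core.DeviceScope(ideep):
    net = core.Net("int8_fusion_example")
    # An Int8Conv followed by Int8Relu is a typical conv-fusion candidate.
    net.Int8Conv(["X_q", "conv_w_q", "conv_b_q"], "Y_q",
                 kernel=3, pad=1, stride=1, Y_scale=0.1, Y_zero_point=0)
    net.Int8Relu("Y_q", "Y_relu_q", Y_scale=0.1, Y_zero_point=0)

# Rewrites the net proto in place; training_mode is left unset (False) for
# inference, matching the "unset training mode" commit above.
transformations.optimizeForIDEEP(net, training_mode=False)
print(net.Proto())
```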

gujinghui force-pushed the gujinghui:dnnlowp_all branch from 5c3a91a to df16f6d on Feb 25, 2019

gujinghui closed this on Apr 11, 2019
