
[Intel Mkl] Upgrading MKL-DNN to 0.20.6 to fix SGEMM regression #33213

Merged: 1 commit into tensorflow:r1.15 on Oct 14, 2019

Conversation

@claynerobison (Contributor) commented Oct 10, 2019:

This PR to r1.15 is an alternative to reverting mkl-dnn to v0.18. It fixes SGEMM regressions as well as issues that were originally fixed by the mkl-dnn upgrade to 0.20.3 (see #31910 and https://github.com/intel/mkl-dnn/releases/tag/v0.20.6). @martinwicke, @penpornk, @goldiegadde Can you verify whether this fixes the issues for you?
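For context on what is being checked here: SGEMM is the BLAS single-precision (float32) general matrix multiply routine whose performance regressed with the earlier mkl-dnn bump. The sketch below is not from this PR; it is a hypothetical, minimal way to spot such a regression by timing a float32 matmul before and after a library change. NumPy's `@` operator dispatches to whatever BLAS backend NumPy was built against (MKL, OpenBLAS, etc.), so the same script run against two builds gives a rough comparison.

```python
# Minimal SGEMM regression sanity check (illustrative sketch, not part of
# the PR). Times an n x n float32 matmul, which hits the BLAS sgemm path.
import time
import numpy as np

def sgemm_benchmark(n=512, repeats=10):
    """Return average seconds per n x n float32 matrix multiply."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n), dtype=np.float32)
    b = rng.standard_normal((n, n), dtype=np.float32)
    a @ b  # warm-up so library/thread startup cost is excluded
    start = time.perf_counter()
    for _ in range(repeats):
        c = a @ b
    elapsed = (time.perf_counter() - start) / repeats
    assert c.dtype == np.float32  # confirm the single-precision path
    return elapsed

if __name__ == "__main__":
    print(f"avg 512x512 SGEMM time: {sgemm_benchmark():.6f} s")
```

Running this under both library versions and comparing the averages is the kind of before/after measurement a reviewer could use to confirm the 0.20.6 upgrade restores SGEMM performance.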

@claynerobison (Contributor, Author) commented Oct 10, 2019:

@agramesh1

@penpornk (Member) left a comment:

Thank you very much, @claynerobison! The changes look good to me. I'll defer to the TF 1.15 owners for the approval decision.

@martinwicke (Member) commented Oct 10, 2019

@rthadur rthadur self-assigned this Oct 10, 2019
@rthadur rthadur added this to Assigned Reviewer in PR Queue via automation Oct 10, 2019
@goldiegadde goldiegadde assigned goldiegadde and unassigned rthadur Oct 10, 2019
PR Queue automation moved this from Assigned Reviewer to Approved by Reviewer Oct 10, 2019
@goldiegadde goldiegadde merged commit 07bf663 into tensorflow:r1.15 Oct 14, 2019
6 of 8 checks passed
PR Queue automation moved this from Approved by Reviewer to Merged Oct 14, 2019
@nammbash nammbash deleted the mkl-dnn-0.20.6 branch Jun 16, 2020