
[INTEL MKL] Add MKL-DNN quantized Matmul op with some fusions - Part2. #26910

Merged

Conversation

@mdfaijul (Contributor) commented Mar 20, 2019

This PR replaces an older PR (#26271) by splitting it into three parts; this is part 2. It enables MKL-DNN quantized MatMul ops through graph optimization.
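For context, a quantized MatMul multiplies low-precision (e.g. int8) inputs with int32 accumulation and then rescales the result; "fusion" means folding a follow-up op such as a bias add into the same kernel so the graph optimizer can replace the op sequence with one fused node. The sketch below illustrates that idea only; it is not the MKL-DNN implementation, and the helper name and quantization scheme are hypothetical:

```python
import numpy as np

def quantized_matmul_with_bias(a_q, b_q, bias, a_scale, b_scale):
    """Illustrative quantized MatMul with a fused bias add (hypothetical helper).

    a_q, b_q: int8 tensors; bias: float32 vector; *_scale: dequantization scales.
    Accumulates in int32, then rescales and adds the bias in one pass,
    mimicking the kind of fusion a graph optimizer would emit.
    """
    # Cast before matmul so NumPy accumulates in int32 instead of overflowing int8.
    acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
    return acc.astype(np.float32) * (a_scale * b_scale) + bias

# Toy example: symmetrically quantize small float matrices to int8.
a = np.array([[0.5, -0.25], [1.0, 0.75]], dtype=np.float32)
b = np.array([[0.2, 0.4], [-0.6, 0.8]], dtype=np.float32)
a_scale = np.abs(a).max() / 127.0
b_scale = np.abs(b).max() / 127.0
a_q = np.round(a / a_scale).astype(np.int8)
b_q = np.round(b / b_scale).astype(np.int8)
bias = np.array([0.1, -0.1], dtype=np.float32)

out = quantized_matmul_with_bias(a_q, b_q, bias, a_scale, b_scale)
# The fused quantized result tracks the float reference up to quantization error.
assert np.allclose(out, a @ b + bias, atol=1e-2)
```

The point of the fusion is that the int32 accumulator is rescaled and bias-added without a round trip through memory between separate MatMul and BiasAdd nodes.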

@mdfaijul mdfaijul changed the title [INTEL MKL] Enable MKL-DNN quantized Matmul ops through graph optimization. [INTEL MKL] Add MKL-DNN quantized Matmul op with some fusions - Part2. Mar 20, 2019
@rthadur rthadur requested a review from penpornk March 20, 2019 04:31
@rthadur rthadur self-assigned this Mar 20, 2019
@rthadur rthadur added this to Assigned Reviewer in PR Queue via automation Mar 20, 2019
@rthadur rthadur added comp:mkl MKL related issues size:L CL Change Size: Large labels Mar 20, 2019
claynerobison added a commit to Intel-tensorflow/tensorflow that referenced this pull request Mar 28, 2019
@penpornk (Member) left a comment:
Does this PR depend on #26909 (Part 1)?

@mdfaijul (Contributor, Author) commented May 16, 2019

@penpornk Yes, it depends on #26909 (Part 1).

@rthadur rthadur added the prime for PR prioritization label May 16, 2019
@rthadur rthadur requested a review from penpornk May 16, 2019 21:22
@penpornk (Member) left a comment:

I think the PR is good to go. Once #26909 is merged, please rebase the PR and I'll approve it. Thank you again!

@penpornk (Member) left a comment:

Part 1 (#26909) has been merged. (It hasn't shown up on GitHub yet, but it will soon.) I also realized that all three PRs are guarded with INTEL_MKL anyway, so we probably won't have problems with the tests on GitHub. Will approve part 3 too.

PR Queue automation moved this from Assigned Reviewer to Approved by Reviewer May 29, 2019
@tensorflow-bot tensorflow-bot bot added kokoro:force-run Tests on submitted change ready to pull PR ready for merge process labels May 29, 2019
@kokoro-team kokoro-team removed the kokoro:force-run Tests on submitted change label May 29, 2019
@penpornk (Member) commented:

There are four failed tests. macOS Python 2 and CC, Ubuntu Makefile, and Ubuntu Python 2 seem unrelated to this PR. I can't view the log for Windows Bazel, but it's likely unrelated too.

@rthadur Could you please help pull this PR in? Thank you very much! :)

@tensorflow-copybara tensorflow-copybara merged commit ff5b7f2 into tensorflow:master May 30, 2019
PR Queue automation moved this from Approved by Reviewer to Merged May 30, 2019
tensorflow-copybara pushed a commit that referenced this pull request May 30, 2019
@mdfaijul mdfaijul deleted the amin/qmatmul-part2 branch February 18, 2020 16:53
Labels: cla: yes; comp:mkl (MKL related issues); prime (for PR prioritization); ready to pull (PR ready for merge process); size:L (CL Change Size: Large)
Projects: PR Queue (Merged)

6 participants