
Change MKLDNN_THREADING definition from OMP:COMP to OMP #30230

Closed · wants to merge 1 commit

Conversation


@CeadeS CeadeS commented Nov 21, 2019

MKLDNN determines by itself whether Intel OpenMP is present. Defining COMP in advance prevents it from using Intel OpenMP. This results in the behavior described in #29722

@ilia-cher (Contributor)
for future ref., from https://github.com/intel/mkl-dnn/blob/7d2fd500bc78936d1d648ca713b901012f470dbc/cmake/options.cmake

The default option is OMP, which gives a preference to OMP:INTEL, but if
neither Intel MKL is installed nor Intel MKL-ML is available then fallback
to OMP:COMP.
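The fallback described in that options.cmake excerpt can be sketched as a small selection function. This is a hedged Python illustration of the documented logic, not MKL-DNN's actual CMake code; the parameter names (`mkl_found`, `mklml_found`) are assumptions for the sketch:

```python
def select_threading_runtime(requested="OMP", mkl_found=False, mklml_found=False):
    """Mirror the MKLDNN_THREADING fallback described in options.cmake.

    "OMP" prefers OMP:INTEL, but falls back to OMP:COMP (the
    compiler-supplied OpenMP) when neither Intel MKL nor Intel
    MKL-ML is available. Pinning the option to "OMP:COMP" up front
    skips that detection entirely, which is what this PR reverts.
    """
    if requested == "OMP":
        if mkl_found or mklml_found:
            return "OMP:INTEL"   # Intel OpenMP runtime (libiomp5)
        return "OMP:COMP"        # compiler's OpenMP (e.g. libgomp for GCC)
    return requested             # an explicit value is used as-is


print(select_threading_runtime("OMP", mkl_found=True))       # OMP:INTEL
print(select_threading_runtime("OMP"))                       # OMP:COMP
print(select_threading_runtime("OMP:COMP", mkl_found=True))  # OMP:COMP
```

With the default "OMP", Intel OpenMP is picked up automatically when MKL is present; hard-coding "OMP:COMP" forces the compiler's runtime even when it is.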

@ilia-cher (Contributor) left a comment

actually, can we get a better understanding first of why the original option didn't work?
OMP:COMP is supposed to use whatever OpenMP library is supplied by the compiler

@CeadeS (Author) commented Nov 22, 2019

@ilia-cher thank you for your review.
I guess the right solution needs more than changing this option; it was just a small change that helped with my problem.
My observation was that GNU OpenMP is much slower than the Intel one. Therefore, linking it when the Intel version is present should be suppressed entirely. PyTorch seems to default to GNU OpenMP if it is present.
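One quick way to check which OpenMP runtimes are discoverable on a machine is to ask the loader for the usual sonames. This is an illustrative diagnostic, not part of the PR; the candidate library names are the common Linux ones and may differ on other platforms:

```python
from ctypes.util import find_library


def available_openmp_runtimes():
    """Report which OpenMP runtime libraries the system loader can find."""
    candidates = {
        "iomp5": "Intel OpenMP",
        "gomp": "GNU OpenMP (libgomp)",
        "omp": "LLVM OpenMP (libomp)",
    }
    # find_library returns the library name/path if found, else None.
    return {desc: find_library(name) for name, desc in candidates.items()}


for desc, path in available_openmp_runtimes().items():
    print(f"{desc}: {path or 'not found'}")
```

If both libiomp5 and libgomp show up, the build-time choice of MKLDNN_THREADING decides which one MKL-DNN actually uses.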

@cpuhrsch (Contributor)

@CeadeS - this is a core change and we'll need to write a detailed study before we do this.

I assume the main questions to answer here are:
a) Which MKL libraries do we currently link against, and in what order of preference?
b) Which MKL library is better, when, and why?
c) Which parts of PyTorch consume the MKL library, and do they each link to the same one? If not, why not?

@ilia-cher (Contributor)

from the docs, as I understand it, the version of the OpenMP library depends on which compiler you use

@sai-prasanna

This also fixes a related issue where forking processes with PyTorch models creates deadlocks during inference. Intel OpenMP doesn't cause deadlocks after forks.
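A common workaround for fork-related OpenMP deadlocks, regardless of which runtime is linked, is to use the "spawn" start method so workers begin from a fresh interpreter. This is a minimal hedged sketch of that mitigation; `infer` is a hypothetical stand-in for model inference, and the snippet does not reproduce the deadlock itself:

```python
import multiprocessing as mp


def infer(x):
    # Stand-in for a model forward pass (hypothetical).
    return x * 2


def run_inference_spawn(values):
    # "spawn" starts children from a fresh interpreter, so they do not
    # inherit the parent's OpenMP thread state through fork() -- the
    # usual trigger for post-fork deadlocks.
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        return pool.map(infer, values)


if __name__ == "__main__":
    print(run_inference_spawn([0, 1, 2]))  # [0, 2, 4]
```

With fork-based workers, threads created by the OpenMP runtime in the parent are not recreated in the child, which can leave locks held forever; spawn sidesteps that at the cost of slower worker startup.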

@zou3519 zou3519 added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jan 9, 2020
@facebook-github-bot (Contributor)

Hi @CeadeS!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot (Contributor)

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@github-actions

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label May 28, 2022
@github-actions github-actions bot closed this Jun 27, 2022
Labels: cla signed, open source, Stale, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
7 participants