
Joblib's BLAS disabling mechanism to prevent CPU over-subscription does not work #834

mgbckr opened this issue Jan 16, 2019 · 4 comments

@mgbckr commented Jan 16, 2019

python version: 3.6.5
joblib version: 0.13.0 / 0.13.1
numpy version: 1.15.4

According to Section 2.4.7, "Avoiding over-subscription of CPU resources", joblib limits BLAS threading automatically. But it does not work for me: in the following code, X.dot(np.transpose(X)) should not launch multiple threads, yet it does. If I set the corresponding environment variables as indicated in the comments, things work as they should.

# Uncommenting these lines (before importing numpy) fixes the problem:
# import os
# os.environ['OPENBLAS_NUM_THREADS'] = '1'
# os.environ['MKL_NUM_THREADS'] = '1'
# os.environ['NUMEXPR_NUM_THREADS'] = '1'

import numpy as np
import joblib

X = np.random.rand(100, 60000)  # 60 columns: no extra threads; 60000: lots of threads ;)

def func():
    for i in range(10000):
        X.dot(np.transpose(X))

joblib.Parallel(n_jobs=2, verbose=10)(joblib.delayed(func)() for i in range(10000))

Update:
For my own scripts, I noticed that setting these environment variables before importing numpy, scipy, etc. prevents BLAS multithreading from kicking in. In the example above, however, importing joblib before numpy makes no difference. The documentation also seems to suggest that this only works for the 'loky' backend, but forcing it with backend="loky" makes no difference either (even with n_jobs=2, in which case the loky backend is definitely being used).
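
As a side note, one way to verify which BLAS runtime numpy loaded and what its current thread limit is, is the threadpoolctl package. This is an assumption on my part that it is available; it is not otherwise part of this report:

from threadpoolctl import threadpool_info

import numpy as np  # import numpy first so the BLAS runtime is loaded

# One entry per loaded threadpool (OpenBLAS, MKL, OpenMP, ...),
# including the number of threads it is currently allowed to use.
for pool in threadpool_info():
    print(pool["internal_api"], pool["num_threads"])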

I can replicate the issue on two out of three machines. On one machine it happened yesterday, but today it does not. A little mysterious ;)

OS details:

NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Kernel

Linux 4.15.0-43-generic x86_64

@ogrisel (Contributor) commented Jan 21, 2019

joblib.Parallel(n_jobs=1) means running in sequential mode: in this case joblib does nothing with respect to OpenMP / OpenBLAS / MKL threads.

The over-subscription protection in loky applies only to the child worker processes (which are created when n_jobs >= 2).
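
For illustration (not part of the original comment), a minimal sketch of checking what the loky workers actually see, assuming the installed joblib injects thread-limiting environment variables into its children:

import os
from joblib import Parallel, delayed

def report_env():
    # Each loky worker should see the thread-limiting variables that
    # joblib sets to prevent over-subscription; in sequential mode
    # (n_jobs=1) they are never touched.
    keys = ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS")
    return {key: os.environ.get(key) for key in keys}

print(Parallel(n_jobs=2, backend="loky")(delayed(report_env)() for _ in range(2)))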

@mgbckr (Author) commented Jan 21, 2019

As I mentioned, this also happens even when I set n_jobs >= 2. I tried again just to make sure.

Interestingly enough, I also tried again on all three of my machines, and once more it happened on only two of the three.

Let me know if you need any specific info or statistics.

@nikita-astronaut commented May 20, 2020

Dear all,

I also experienced the problem described by @mgbckr. The only way I found to work around it was to manually set:

import os
# Must run before numpy (or anything else that loads a BLAS) is imported:
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['OPENBLAS_NUM_THREADS'] = '1'
os.environ['MKL_NUM_THREADS'] = '1'
os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
os.environ['NUMEXPR_NUM_THREADS'] = '1'

@tomMoral (Contributor) commented May 21, 2020

Hi @nikita-astronaut,
Thanks for the report.
This issue was opened for joblib 0.13 and should have been fixed in the latest joblib version by #940. Could you confirm that you still see this issue with the latest joblib (0.15.1)? If so, could you confirm that it happens with numpy, and paste the output of np.show_config()?
