
Computing melspectrogram (as well as some other features) hangs only with joblib (n_jobs >= 2) #381

Closed
keunwoochoi opened this issue Jul 1, 2016 · 9 comments
Labels
discussion · question · wontfix

Comments

@keunwoochoi
Contributor

The original code is more verbose, but it can be summarised as below.

from joblib import Parallel, delayed
import librosa

def convert(path):
    x, sr = librosa.load(path)
    X = librosa.feature.melspectrogram(x)  # hangs here with n_jobs >= 2

...
Parallel(n_jobs=N_JOBS)(delayed(convert)(path) for path in paths)
...
  • It's okay when N_JOBS = 1.
  • It's also okay if the function is called directly (without joblib).
  • STFT/CQT work fine.

I just checked the other librosa.feature functions:

  • chroma_*, spectral_*, tonnetz, zero_crossing_rate: working well
  • mfcc, melspectrogram, poly_features, tempogram: hangs

Is this a known problem (if it's a problem at all)? I tested after pip install librosa --upgrade, i.e. version 0.4.3.

@dawenl

dawenl commented Jul 1, 2016

Does joblib hang even without librosa (but with numpy)? It's a known problem that joblib can conflict with some BLAS implementations (e.g. OpenBLAS) under multiprocessing. If you are using OpenBLAS, the following works for me:

import os
os.environ['OPENBLAS_NUM_THREADS'] = '1'
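
For reference, a minimal sketch of how I'd wire that into your snippet (assuming the variable has to be set before numpy first loads OpenBLAS; the file list is a placeholder):

import os
os.environ['OPENBLAS_NUM_THREADS'] = '1'    # set before numpy/OpenBLAS is imported

import librosa                              # pulls in numpy under the hood
from joblib import Parallel, delayed

def convert(path):
    x, sr = librosa.load(path)
    return librosa.feature.melspectrogram(x)

paths = ['clip1.wav', 'clip2.wav']          # placeholder file list
features = Parallel(n_jobs=2)(delayed(convert)(p) for p in paths)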

@keunwoochoi
Contributor Author

Hm, I thought that only applied when using the multiprocessing module directly, but perhaps it applies to anything that does multiprocessing under the hood (which is so annoying).
With that option set it still hangs.

The test was on macOS so far, but on Ubuntu it's the same. :(

@stefan-balke
Member

Got the same problem last week.

On macOS, joblib and numpy have their problems.

The solution: use joblib's forkserver start method (Python 3.4+), like this:

(py3) bash$ JOBLIB_START_METHOD='forkserver' python foo.py

This sets the environment variable JOBLIB_START_METHOD to forkserver for this Python process. It is a little slower due to the extra overhead, but at least it works...
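
If you'd rather do it from inside the script, this sketch should be equivalent (assuming the variable only needs to be set before joblib is imported):

import os
os.environ['JOBLIB_START_METHOD'] = 'forkserver'   # must happen before importing joblib

from joblib import Parallel, delayed                # now uses the forkserver start method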

Have fun!

@keunwoochoi
Contributor Author

That gives me a good reason to move from Python 2.7 to 3.4. I'll try it, thanks!

@bmcfee added the question, wontfix, and discussion labels on Jul 1, 2016
@bmcfee
Member

bmcfee commented Jul 1, 2016

Strange -- I've never had this problem, but thanks for reporting it here.

@keunwoochoi
Contributor Author

Correction to my earlier comment ("The test was on macOS so far, but on Ubuntu it's the same"): it is not the same; with Python 2.7 and Ubuntu 14.04 it's okay.

I tried what @stefan-balke proposed (with Python 2.7) on Mac and it didn't work; I haven't tested on Python 3 yet.

@stefan-balke
Member

Sorry, I have no experience with Python 2.7 and joblib, but at some point @faroit said he had problems when using it...

@faroit
Contributor

faroit commented Jul 21, 2016

@keunwoochoi I cannot really remember what caused the problems, but I have been using the standard Python multiprocessing module instead of joblib since then. There I found that some problems are caused by pickling errors; those can sometimes be solved by using dill.
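
Roughly along these lines (a sketch only; the paths are placeholders, and convert has to stay a top-level function so it can be pickled):

import multiprocessing as mp
import librosa

def convert(path):
    x, sr = librosa.load(path)
    return librosa.feature.melspectrogram(x)

if __name__ == '__main__':
    paths = ['clip1.wav', 'clip2.wav']   # placeholder file list
    pool = mp.Pool(processes=2)          # one worker process per job
    features = pool.map(convert, paths)
    pool.close()
    pool.join()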

@bmcfee
Member

bmcfee commented Sep 23, 2016

Since this is a numpy/joblib issue, and not a librosa one, I'm closing this out.
