Use parallel computing via future_lapply only when appropriate (detect multithreaded backend) #111

Open
jchiquet opened this issue Nov 14, 2023 · 0 comments

Comments

@jchiquet
Member

As mentioned by Cole Trapnell in PR #110, future_lapply can significantly slow down computation when a multicore plan is active on top of a multithreaded BLAS backend (like OpenBLAS):

The issue is that on machines that use OpenBLAS with a multithreaded backend, using future can deadlock the session. A workaround is to wrap calls to future with something like this:

# Remember the current thread settings, then force single-threaded OpenMP/BLAS
old_omp_num_threads = as.numeric(Sys.getenv("OMP_NUM_THREADS"))
if (is.na(old_omp_num_threads)) {
  old_omp_num_threads = 1
}
RhpcBLASctl::omp_set_num_threads(1)

old_blas_num_threads = as.numeric(Sys.getenv("OPENBLAS_NUM_THREADS"))
if (is.na(old_blas_num_threads)) {
  old_blas_num_threads = 1
}
RhpcBLASctl::blas_set_num_threads(1)
Then you do your work with future, and afterwards restore the previous settings:

RhpcBLASctl::omp_set_num_threads(old_omp_num_threads)
RhpcBLASctl::blas_set_num_threads(old_blas_num_threads)
We didn't add this because we didn't want to add a new dependency on RhpcBLASctl to the package, but you could do so if you want to be able to do linear algebra inside of functions called by future.
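A compact way to apply that workaround is to wrap the whole sequence in a helper. The sketch below assumes the RhpcBLASctl dependency discussed above and uses a hypothetical with_single_threaded_blas() name; on.exit() restores the previous thread counts even if the wrapped code errors.

# Hypothetical helper, not part of the package: evaluate an expression with
# single-threaded OpenMP/BLAS, then restore the previous thread counts.
with_single_threaded_blas <- function(expr) {
  old_omp  <- RhpcBLASctl::omp_get_max_threads()
  old_blas <- RhpcBLASctl::blas_get_num_procs()
  on.exit({
    RhpcBLASctl::omp_set_num_threads(old_omp)
    RhpcBLASctl::blas_set_num_threads(old_blas)
  }, add = TRUE)
  RhpcBLASctl::omp_set_num_threads(1)
  RhpcBLASctl::blas_set_num_threads(1)
  force(expr)  # the future-based work runs here, with BLAS limited to one thread
}

# Example use (models and fit_one_model are placeholders):
# with_single_threaded_blas(future.apply::future_lapply(models, fit_one_model))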

I suggest defining a PLN_lapply function which, depending on the architecture at hand, dispatches to either a classic lapply or a multicore future_lapply. Check whether future can already handle this (via the 'sequential' or 'multicore' plans).
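A minimal sketch of what such a dispatcher could look like, assuming the name PLN_lapply and a simple heuristic that stays sequential whenever a multithreaded BLAS is detected (the RhpcBLASctl call is an assumption here; the package may prefer another detection mechanism to avoid the dependency):

# Hypothetical dispatcher: use future_lapply only when parallel execution is safe.
PLN_lapply <- function(X, FUN, ...) {
  blas_procs <- tryCatch(RhpcBLASctl::blas_get_num_procs(), error = function(e) 1L)
  if (blas_procs <= 1 && requireNamespace("future.apply", quietly = TRUE)) {
    ## single-threaded BLAS: rely on the plan set by the user,
    ## e.g. future::plan("multicore", workers = 2) or future::plan("sequential")
    future.apply::future_lapply(X, FUN, ...)
  } else {
    ## multithreaded BLAS detected (or future.apply unavailable): plain lapply
    lapply(X, FUN, ...)
  }
}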
