Zombies when processx (via callr) and parallel are used in the same session. #113
Comments
Update: the following is sufficient to produce the zombies I am seeing.

```r
for (i in 1:2) {
  parallel::mclapply(1:2, sqrt, mc.cores = 2)
  processx::run("ls")
}
```
Put all the tests involving processx at the end (after all the SIGCHLD stuff that the parallel package uses).
The conditions for zombies seem to be a bit more specific. Zombies spawn here:

```r
parallel::mclapply(1:2, sqrt, mc.cores = 2)
processx::run("ls")
parallel::mclapply(1:2, sqrt, mc.cores = 2)
```

but not in any of these example sessions:

```r
parallel::mclapply(1:2, sqrt, mc.cores = 2)
processx::run("ls")
```

```r
processx::run("ls")
parallel::mclapply(1:2, sqrt, mc.cores = 2)
```

```r
processx::run("ls")
parallel::mclapply(1:2, sqrt, mc.cores = 2)
processx::run("ls")
```
Looks like you have a workaround, so I'll close this. As I said in the callr issue, fork clusters are not reliable and should not be used. I'll think about a solution for the signal handler clashes; right now I only have hacky and dangerous workarounds.
Sounds reasonable, thanks!
You mean fork clusters as created by … ? But now, after switching to R 3.6, I'm also seeing this … Thanks.
https://duckduckgo.com/?q=%22fork+without+exec%22&t=canonical&ia=web Unfortunately there is no workaround, other than avoiding the use of fork clusters and processx together.
So the problem seems to be with multi-threaded programs.
The problem is that I have a large codebase using mclapply. What would you recommend using instead?
On some platforms the system libraries are themselves multi-threaded, and they don't support fork without exec. So it is not just a problem with explicitly multi-threaded programs. See e.g. …
First of all, is that "error" really an R error, or just a message? Because if it is the latter, and you are not seeing other issues (e.g. zombies or crashes), then you can just ignore it. Also, can you please open a new issue for this? Maybe I can look into some hack to notify parallel that its subprocess has finished.
It seems to just be a message. But it also prevents quitting the R console/session normally. Thanks!
I think what is happening is that …
The `drake` package tries to be a high-performance computing engine, so it includes some small unit tests of its parallel backends. In R 3.5.0, when I run these tests with `test_check()` or `devtools::test()`, I see a bunch of zombie processes when I call `top -bn1 | grep R$` in a Linux shell. Then, when I quit the R session, I see "Error while shutting down parallel: unable to terminate some child processes". This does not seem to happen if I use `devtools::check()` or revert to R 3.4.4.

The tests use `callr` and `parallel` at various points, though never in the same test. If I run each test in a separate R session, or if I deactivate only the uses of `callr`, there are no zombies.

So far, I have not been able to create a small reprex. I will work further on this, but in the meantime, I think general advice will help me debug.
Related: r-lib/testthat#757