multiprocessing needs option to eschew fork() under Linux #52959
The "multiprocessing" module uses a bare fork() to create child processes under Linux, so the children get a copy of the entire state of the parent process. But under Windows, child processes are freshly spun-up Python interpreters with none of the data structures or open connections of the parent process available. This means that code that tests fine under Linux, because it is depending on residual parent state in a way that the programmer has not noticed, can fail spectacularly under Windows.

Therefore, the "multiprocessing" module should offer an option under Linux that ignores the advantage of being able to do a bare fork() and instead spins up a new interpreter instance just like Windows does. Some developers will just use this for testing under Linux, so their test results are valid for Windows too; and some developers might even use this in production, preferring to give up a bit of efficiency under Linux in return for an application that will show the same behavior on both platforms. Either way, an option that lets the developer subvert the simple "sys.platform != 'win32'" check in "forking.py" would go a long way towards helping us write platform-agnostic Python programs.
This is on my wish list, but I have not had time to do it. Patch welcome.

Jesse, it's great to learn it's on your wish list too! Should I design the patch so that (a) there is some global in the module that needs tweaking to choose the child creation technique, or (b) an argument to the Process() constructor forces a full interpreter exec to make all platforms match, or (c) a process object, once created, has an attribute (like ".daemon") that you set before starting it off? Or (d) should there be a subclass of Process that, if specifically used, has the fork/exec behavior instead of just doing the fork? My vote would probably be for (b), but you have a much better feel for the library and its style than I do.

I pretty much agree with (b), an argument; your gut instinct is correct. There's a long-standing thread in python-dev which pretty much solidified my thinking about whether or not we need this (we do). Any patch has to be backwards compatible, by the way: it cannot alter the current default behavior. Also, it has to target Python 3, as 2.7 is nearing final and this is a behavioral change.
+1 for this issue; I've also wished for this feature in the past.

+1

I have suspected that this may be necessary, not merely useful, for some time, and bpo-6721 seems to verify that. In addition to adding the keyword arg to Process, it should also be added to Pool and Manager. Is anyone working on a patch? If not I will work on a patch ASAP.
No one is currently working on a patch AFAIK |
Here is a patch which adds the following functions:

forking_disable()
forking_enable()
forking_is_enabled()
set_semaphore_prefix()
get_semaphore_prefix()

To create child processes using fork+exec on Unix, call forking_disable() before any processes are started. I have tested the patch on Linux (by adding forking_disable() to the test setup).

There are some issues with named semaphores. When forking is disabled, semaphores are created with names that have to be explicitly unlinked. But if a process is killed without exiting cleanly then the name may never be unlinked, leaving files like

    /dev/shm/sem.mp-fa012c80-4019-2

which represent leaked semaphores. These won't be destroyed until the machine is rebooted. If some form of this patch is accepted, then the problem of leaked semaphores will need to be dealt with.
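A quick way to look for such leaks on Linux can be sketched as follows. This is only an illustration (the helper name `leaked_mp_semaphores` is not part of the patch); it assumes the "mp-" prefix shown in the example path above, and that named POSIX semaphores appear under /dev/shm as `sem.<name>`:

```python
import glob

def leaked_mp_semaphores(prefix="mp-"):
    """Return paths of named POSIX semaphores left under /dev/shm (Linux).

    Named semaphores show up as /dev/shm/sem.<name>; the 'mp-' prefix
    matches the leaked-semaphore name quoted above.
    """
    return glob.glob("/dev/shm/sem.%s*" % prefix)
```

Running this after an abnormal exit shows whether any semaphore names were left behind and need to be unlinked.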
Small fix to patch. |
Thanks for the patch sbt.
Another, although less common, advantage over the current implementation: with a bare fork() you can run out of memory pretty easily when working with a large dataset if the operating system doesn't do overcommitting. If fork() is followed by an exec, there is no problem. Thoughts?
There is probably lots of such code:
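As an illustration of the kind of code in question (hypothetical names, written against the start-method API this issue eventually added): module-level state mutated after import is inherited by the child under fork, but lost under spawn or forkserver, where the module is re-imported fresh.

```python
import multiprocessing as mp

CONFIG = {}  # illustrative module-level state

def worker(q):
    # under fork the child inherits the parent's mutated CONFIG;
    # under spawn/forkserver the module is re-imported and CONFIG is {} again
    q.put(CONFIG.get("mode"))

def run(method):
    CONFIG["mode"] = "fast"  # mutated after import, only in the parent
    ctx = mp.get_context(method)
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(q,))
    p.start()
    result = q.get()
    p.join()
    return result
```

Here run("fork") returns "fast", while run("spawn") would see the freshly imported, empty CONFIG. Code like this passes its tests under fork and silently depends on inherited parent state.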
I'm not convinced about making it the default behaviour, and certainly not the only one. I have a working patch which ensures that leaked semaphores get cleaned up on exit. However, I think that to add proper tests for the patch, test_multiprocessing needs to be refactored. Maybe we could end up with multiprocessing_common.py and test_multiprocessing_others.py, among others. The actual unittests would be in multiprocessing_common.py; the other files would run the unittests in multiprocessing_common.py using different configurations. Thoughts?
Then I'm not convinced that this patch is useful. |
On Wednesday, December 21, 2011 at 10:04 AM, Charles-François Natali wrote:

While I would tend to agree with you in theory, I don't think we should make it the default, at least not without a LOT of lead time. There's a surprising amount of code relying on the current behavior, so I think the best course is to enable this option and change the docs to steer users in this direction. For users jumping from 2.x into 3.x, the fewer surprises they have the better, and changing the default behavior of a stdlib module in this way would qualify as surprising.
See also consolidated bpo-13558 for additional justification for a non-fork process option on OS X.

Attached is an updated version of the mp_fork_exec.patch. This one is able to reliably clean up any unlinked semaphores if the program exits abnormally.
mp_split_tests.patch splits up test_multiprocessing.py into:

test_multiprocessing_misc.py
mp_common.py
test_multiprocessing_fork.py
I don't know what the others think, but I'm still -1 on this patch.
A use case for not using fork() is when your parent process opens some system resources of some sort (for example a listening TCP socket). The child will then inherit those resources, which can have all kinds of unforeseen and troublesome consequences (for example that listening TCP socket will be left open in the child when it is closed in the parent, and so trying to bind() to the same port again will fail). Generally, I think having an option for zero-sharing spawning of processes would help code quality.

By the way, instead of doing fork() + exec() in pure Python, you probably want to use _posixsubprocess.fork_exec().

+1 I still have to use parallel python (pp) in our application stack because the fork() approach causes a lot of strange issues in our application. It might be the punishment for embedding a Java runtime env into a Python process, too. :)

The patch as it stands still depends on fd inheritance, so you would need to use FD_CLOEXEC on your listening socket. But yes, it should be possible to use the closefds feature of _posixsubprocess. BTW, I also have working code (which passes the unittests) that starts a helper process at the beginning of the program and which will fork processes on behalf of the other processes. This also solves the issue of unintended inheritance of resources (and the mixing of fork with threads) but is as fast as doing normal forks.
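Setting FD_CLOEXEC on a listening socket, as suggested above, is standard fcntl usage and can be sketched like this (note that later Python versions made fds non-inheritable by default via PEP 446, so this is mainly needed on older versions or for dup()ed descriptors):

```python
import fcntl
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(5)

# set FD_CLOEXEC so the descriptor is closed automatically across exec()
flags = fcntl.fcntl(srv.fileno(), fcntl.F_GETFD)
fcntl.fcntl(srv.fileno(), fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
```

With the flag set, a child created by fork+exec no longer holds the listening socket open, so the parent can close and rebind it.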
For updated code see http://hg.python.org/sandbox/sbt#spawn. This uses _posixsubprocess and closefds=True.
http://hg.python.org/sandbox/sbt#spawn now contains support for starting processes via a separate server process. This depends on fd passing support. This also solves the problem of mixing threads and processes, but is much faster than using fork+exec. It seems to be just as fast as using plain fork. I have tested it successfully on Linux and a MacOSX buildbot. (OpenSolaris does not seem to support fd passing.)

At the beginning of your program you write

    multiprocessing.set_start_method('forkserver')

to use the fork server. Alternatively you can use

    multiprocessing.set_start_method('spawn')

to use _posixsubprocess.fork_exec() with closefds=True on Unix, or

    multiprocessing.set_start_method('fork')

to use the standard fork method.

This branch also stops child processes on Windows from automatically inheriting inheritable handles. The test suite can be run with each of the different start methods.
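A minimal runnable sketch of the start-method API described above (the demo uses "fork" so it runs anywhere on Unix; swap in "spawn" or "forkserver" to exercise the new methods):

```python
import multiprocessing as mp

def worker(q):
    q.put("hello from child")

if __name__ == "__main__":
    # available methods depend on the platform; on Linux:
    # ['fork', 'spawn', 'forkserver']
    print(mp.get_all_start_methods())

    # choose one start method for the whole program, before any
    # Process, Pool or Manager is created
    mp.set_start_method("fork")   # or "spawn" / "forkserver"

    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    print(q.get())  # -> hello from child
    p.join()
```

Note that set_start_method() may be called at most once per program; mp.get_context(method) gives an isolated context instead, which is handy when a library cannot dictate the global default.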
Richard, apart from performance, what's the advantage of this approach over the fork+exec version?
It is really just performance. For context, running the unittests in a 1 cpu Linux VM gives me:

fork:
fork+exec:
forkserver:

So running the unit tests using fork+exec takes about 4 times as much cpu time.

Starting then immediately joining a trivial process in a loop gives:

fork: 0.025 seconds/process

So latency is about 10 times higher with fork+exec.
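The start/join latency figure quoted above can be measured with a sketch like this (start_join_latency is an illustrative helper, not from the branch; absolute numbers depend heavily on the machine):

```python
import time
import multiprocessing as mp

def _noop():
    pass

def start_join_latency(method, n=10):
    """Average seconds to start and immediately join a trivial process."""
    ctx = mp.get_context(method)
    t0 = time.monotonic()
    for _ in range(n):
        p = ctx.Process(target=_noop)
        p.start()
        p.join()
    return (time.monotonic() - t0) / n

if __name__ == "__main__":
    # compare e.g. "fork" against "spawn" and "forkserver"
    print("fork: %.4f seconds/process" % start_join_latency("fork"))
```

Running it with each available method reproduces the kind of comparison made in this comment.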
The different fork methods are now implemented in separate files. The line counts are:

117 popen_spawn_win32.py

I don't think any more sharing between the win32 and posix cases is possible. (Note that popen_spawn_posix.py implements a cleanup helper process which is also used by the "forkserver" method.)
Actually, avoiding the whole fork+threads mess is a big motivation. multiprocessing uses threads in a few places (like implementing Queue), and tries to do so as safely as possible. But unless you turn off garbage collection you cannot really control what code might be running in a background thread when the main thread forks.
OSX does not seem to allow passing multiple ancillary messages at once, but you can send multiple fds in a single ancillary message. Also, when you send fds on OSX you have to wait for a response from the other end before doing anything else. Not doing that was the cause of the previous fd passing failures in test_multiprocessing.
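The single-ancillary-message approach described here is standard SCM_RIGHTS fd passing over a Unix socket; a sketch along the lines of the example in the socket module docs (send_fds/recv_fds are illustrative helpers, not multiprocessing APIs):

```python
import array
import os
import socket

def send_fds(sock, fds):
    # one SCM_RIGHTS ancillary message carrying all the fds at once;
    # OSX rejects multiple ancillary messages, but one message
    # holding several fds is fine
    return sock.sendmsg([b"F"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                                  array.array("i", fds))])

def recv_fds(sock, maxfds):
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(maxfds * 4))
    fds = array.array("i")
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            # keep only a whole number of fds
            fds.frombytes(data[:len(data) - (len(data) % fds.itemsize)])
    return list(fds)
```

A forkserver hands the pipe/socket fds for a new child to the requesting process in exactly this way.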
Numbers when running on Linux on a laptop with 2 cores + hyperthreading.

RUNNING UNITTESTS:

fork+exec:
forkserver:

LATENCY:

Still 4 times the cpu time and 10 times the latency. But the latency is far lower than in the VM.
I think the forkserver approach is a good idea. It is what a lot of users will choose. forkserver won't work everywhere though, so the fork+exec option is still desirable to have available. Threads can be started by non-Python code (extension modules, or the larger C/C++ program that is embedding the Python interpreter within it). In that context, by the time the multiprocessing module is imported it can be too late to start a fork server, and there is no easy way for Python code to determine if that is the case. The safest default would be fork+exec, though we need to implement the fork+exec code as a C extension module or have it use subprocess (as I noted in the mp_fork_exec.patch review).
That was an old version of the patch. In the branch, _posixsubprocess is used instead of fork+exec, and all unnecessary fds are closed.
ah, i missed that update. cool! +1 |
The spawn branch is in decent shape, although the documentation is not up-to-date. I would like to commit before the first alpha. |
I have done quite a bit of refactoring and added some extra tests. When I try using the forkserver start method on the OSX Tiger buildbot (the only OSX one available) I get errors. I have disabled the tests for OSX, but it seemed to be working before. Maybe that was with a different buildbot. |
Richard, can you say what failed on the OS X 10.4 (Tiger) buildbot? FWIW, I tested b3620777f54c.diff (and commented out the darwin skip of test_multiprocessing_forkserver) on OS X 10.4, 10.5, and 10.8. There were no failures on any of them. The only vaguely suspicious message when running with -v was:

    ./python -m test -v test_multiprocessing_forkserver
    OK (skipped=5)   # on 32-bit: 'largest assignable fd number is too small'
    1 test OK.
There seems to be a problem which depends on the order in which you run the tests:

    ./python -m test -v \

Then I get lots of failures when forkserver runs. I have tracked it down.

> The only vaguely suspicious message when running with -v was:

That is expected and it shows the semaphore tracker is working as intended.
The forkserver process is now started using _posixsubprocess.fork_exec(). This should fix the order dependent problem mentioned before. Also the forkserver tests are now reenabled on OSX. |
I have added documentation now so I think it is ready to merge (except for a change to Makefile). |
Good for me. This is a very nice addition! |
New changeset 3b82e0d83bf9 by Richard Oudkerk in branch 'default': |
Thanks. I do see a couple of failed assertions on Windows which presumably happen in a child process because they do not cause a failure:
The assertion is in _PyGC_CollectNoFail() and checks that it is not called recursively.
That's extremely weird: _PyGC_CollectNoFail() is only called during interpreter shutdown. Perhaps you could try to find out in which test this happens?
Using the custom builders, it seems to happen randomly in test_rlock:

    test_rlock (test.test_multiprocessing_spawn.WithManagerTestLock) ...
    Assertion failed: !collecting, file ..\Modules\gcmodule.c, line 1617

http://buildbot.python.org/all/builders/AMD64%20Windows%20Server%202008%20%5BSB%5D%20custom
Ok, I enabled faulthandler in the child process and I got the explanation: multiprocessing's manager Server uses daemon threads... Daemon threads are not joined when the interpreter shuts down, they are simply "frozen" at some point. Unfortunately, it may happen that a daemon thread is "frozen" while it was doing a cyclic garbage collection, which later triggers the assert. I'm gonna replace the assert by a plain "if", then.
The new tests produce a few warnings:

    $ ./python -m test -uall -v -j2 test_multiprocessing_fork
    OK (skipped=4)
    Warning -- threading._dangling was modified by test_multiprocessing_fork
    Warning -- multiprocessing.process._dangling was modified by test_multiprocessing_fork
    1 test altered the execution environment:
        test_multiprocessing_fork

I've seen test_multiprocessing_forkserver giving warnings too, while running the whole test suite, but I can't reproduce them while running it alone. The warnings seem quite similar though, so a single fix might resolve the problem with all the tests. The "Using start method '...'" message should also be displayed only when the tests are run in verbose mode.
New changeset f6c7ad7d029a by Richard Oudkerk in branch 'default':
New changeset e99832a60e63 by Richard Oudkerk in branch 'default':
New changeset 6d998a43102b by Richard Oudkerk in branch 'default': |
Seems to be fixed now. |
New changeset b941a320601a by R David Murray in branch 'default': |