ProcessPoolExecutor(max_workers=64) crashes on Windows #71090
Comments
I'm using Python 3.5.1 x86-64 on Windows Server 2008 R2. Trying to run the ProcessPoolExecutor example [1] generates this exception:

Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Program Files\Python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Program Files\Python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "C:\Program Files\Python35\lib\concurrent\futures\process.py", line 270, in _queue_management_worker
ready = wait([reader] + sentinels)
File "C:\Program Files\Python35\lib\multiprocessing\connection.py", line 859, in wait
ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout)
File "C:\Program Files\Python35\lib\multiprocessing\connection.py", line 791, in _exhaustive_wait
res = _winapi.WaitForMultipleObjects(L, False, timeout)
ValueError: need at most 63 handles, got a sequence of length 64

The problem seems to be related to the value of the Windows constant MAXIMUM_WAIT_OBJECTS (see [2]), which is 64. This machine has 64 logical cores, so ProcessPoolExecutor defaults to 64 workers. Lowering max_workers to 63 or 62 still results in the same exception, but max_workers=61 works fine.

[1] https://docs.python.org/3.5/library/concurrent.futures.html#processpoolexecutor-example
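
For context, the example at [1] is, give or take, the following prime-checking script (abridged here; the PRIMES values are placeholders rather than the exact numbers from the docs). With no max_workers argument the executor defaults to os.cpu_count() workers, which is what pushes it past the limit on a 64-core machine:

```python
import concurrent.futures
import math

# Placeholder values; the docs example uses a handful of large primes.
PRIMES = [112272535095293, 115280095190773, 115797848077099]

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for i in range(3, int(math.sqrt(n)) + 1, 2):
        if n % i == 0:
            return False
    return True

def main():
    # Default max_workers == os.cpu_count(): 64 workers on the affected machine.
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()
```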
The example runs fine, in about 1 second, on my 6-core Pentium (which I guess is 12 logical cores). I am guessing that the default number of workers needs to be changed, at least on Windows, to min(#logical_cores, 60).
Just noting that multiprocessing.Pool works fine here: in the failing example, add import multiprocessing as mp and change with concurrent.futures.ProcessPoolExecutor() as executor: to with mp.Pool() as executor:. That's all it takes. On my 4-core Win10 box (8 logical cores), that continued to work fine even when passing 1024 to mp.Pool() (although it obviously burned time and RAM to create over a thousand processes). Some quick Googling strongly suggests there's no reasonably general way to overcome the Windows-defined MAXIMUM_WAIT_OBJECTS=64 for implementations that call the Windows WaitForMultipleObjects().
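
A minimal self-contained sketch of that substitution, with the same structure as the docs example (PRIMES values again just placeholders):

```python
import math
import multiprocessing as mp

PRIMES = [112272535095293, 115280095190773, 115797848077099]  # placeholders

def is_prime(n):
    if n < 2:
        return False
    return all(n % i for i in range(2, int(math.sqrt(n)) + 1))

def main():
    # Same shape as the ProcessPoolExecutor docs example, but using
    # multiprocessing.Pool, which does not trip over the 64-handle limit.
    with mp.Pool() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()
```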
The recommended way to deal with this is to spin up threads to do the wait (which sounds horribly inefficient, but threads on Windows are cheap, especially if they are waiting on kernel objects), and then wait on each thread.

Personally I think it'd be fine to make the _winapi module do that transparently for WaitForMultipleObjects, as it's complicated to get right (you need to ensure you map back to the original handle, timeouts and cancellation get complicated, there are real race conditions (mainly for auto-reset events), etc.), but in all circumstances it's better than just failing immediately. Handling it within multiprocessing isn't a bad idea, but won't help other users.

I'd love to write the code to do it, but I doubt I'll get time (especially since I'm missing the PyCon US sprints this year). Happy to help someone else through it. We're going to see Python being used on more and more multicore systems over time, where this will become a genuine issue.
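
For illustration, here is a rough sketch of that thread-per-chunk idea, assuming the private _winapi module on Windows. It only covers the easy part, splitting the handle list and mapping results back to the original handles; the hard parts mentioned above (cancelling the other chunks, abandoned mutexes, auto-reset-event races) are deliberately left out, so it only really makes sense with a finite timeout:

```python
import threading
import _winapi  # Windows-only, CPython-private module

MAXIMUM_WAIT_OBJECTS = 64          # Windows-defined per-call limit
CHUNK = MAXIMUM_WAIT_OBJECTS - 1   # stay safely below the limit

def wait_any(handles, timeout_ms):
    """Wait on an arbitrary number of handles; return those that signalled."""
    signalled = []
    lock = threading.Lock()

    def wait_chunk(chunk):
        # Each thread waits on at most CHUNK handles, so no single
        # WaitForMultipleObjects call exceeds MAXIMUM_WAIT_OBJECTS.
        res = _winapi.WaitForMultipleObjects(chunk, False, timeout_ms)
        if res != _winapi.WAIT_TIMEOUT:
            with lock:
                # Map the index Windows returned back to the original handle
                # (WAIT_ABANDONED results are not handled in this sketch).
                signalled.append(chunk[res - _winapi.WAIT_OBJECT_0])

    threads = [threading.Thread(target=wait_chunk,
                                args=(handles[i:i + CHUNK],))
               for i in range(0, len(handles), CHUNK)]
    for t in threads:
        t.start()
    for t in threads:
        # NOTE: chunks whose handles never fire only return once timeout_ms
        # expires; real code would also signal a shared event to wake them.
        t.join()
    return signalled
```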
This is now showing up in end-user tools like black: psf/black#564
If no one has short-term plans to improve multiprocessing.connection.wait, then I'll update the docs to list this limitation, ensure that ProcessPoolExecutor never defaults to more than 60 processes on Windows, and have it raise a ValueError if the user explicitly passes a larger number.
BTW, the 61-process limit comes from: 63 - <the result queue reader> - <the thread wakeup reader>
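
In other words, the plan amounts to a guard roughly like the following in concurrent.futures.process. This is a sketch of the behaviour described above, not necessarily the code that eventually landed; _MAX_WINDOWS_WORKERS and _resolve_max_workers are illustrative names:

```python
import os
import sys

# 63-object wait budget, minus the result-queue reader and the
# thread-wakeup reader mentioned above.
_MAX_WINDOWS_WORKERS = 63 - 2   # == 61

def _resolve_max_workers(max_workers=None):
    if max_workers is None:
        workers = os.cpu_count() or 1
        if sys.platform == 'win32':
            # Default silently caps at the Windows limit.
            workers = min(workers, _MAX_WINDOWS_WORKERS)
        return workers
    if max_workers <= 0:
        raise ValueError("max_workers must be greater than 0")
    if sys.platform == 'win32' and max_workers > _MAX_WINDOWS_WORKERS:
        # An explicit, too-large request fails loudly instead of crashing later.
        raise ValueError(f"max_workers must be <= {_MAX_WINDOWS_WORKERS}")
    return max_workers
```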
This is still a problem in Python 3.7 (and, I guess, 3.8). Even without specifying max_workers, it fails with a ValueError on _winapi.WaitForMultipleObjects, with the message "need at most 63 handles, got a sequence of length 63". That happens with max_workers=None and max_workers=61, but not with max_workers=60. I wonder if there's an off-by-one in this test: Line 1708 in 7668a8b
More likely there's been another change to the events that are listened to by multiprocessing, which didn't update the overall limit. File a new bug, please.
I took the liberty of filing this: https://bugs.python.org/issue40263 Cheers.