Allow registering after-fork initializers in multiprocessing #74014
Comments
Currently, multiprocessing has hard-coded logic to re-seed the Python random generator (in the random module) whenever a process is forked; this is present in two places in the codebase. However, other libraries would like to benefit from this mechanism. For example, NumPy has its own random number generator that would also benefit from re-seeding after fork(). Currently, this is solvable using multiprocessing.Pool, which has an initializer argument. Therefore, I'd like to propose adding a new top-level function in multiprocessing (and also a new Context method) to register a new initializer function for use after fork(). That way, each library can add its own initializers if desired, freeing users from the burden of doing so in their applications.
Maybe a better way would be to proceed with http://bugs.python.org/issue16500?
That issue seems to have stalled because it focussed on low-level APIs, and also because it proposes a new module, with the API questions that entails. Another possible stance is that os.fork() should be left as-is, as a low-level primitive, and this functionality should be provided by the higher-level multiprocessing module.
Having read through bpo-16500 and bpo-6721, I worry that this could again become bogged down with similar concerns. With the specific example of NumPy, I am not sure I would want its random number generator to be reseeded with each forked process; there are many situations where I very much need to preserve the original seed and/or current PRNG state. I do not yet see a clear, motivating use case even after reading those two older issues. I worry that if it were added it would (almost?) never get used, either because the need is rare or because developers will more often think of solving this in their own target functions when they first start up. The suggestion of a top-level function and Context method makes good sense to me as a place to offer such a thing, but is there a clearer use case?
I think ideally on numpy's end we would reseed iff the RNG was unseeded. Now that I think about it I'm a little surprised that we haven't had more complaints about this, so I guess it's not a super urgent issue, but that would be an improvement over the status quo, I think.
The use case is quite clear here. The specific need to re-seed the Numpy PRNG has already come up in two different projects I work on: Numba and Dask. I wouldn't be surprised if other libraries have similar needs. If you want a reproducible RNG sequence, you should actually use a specific, explicit seed (and possibly instantiate a dedicated random state instead of using the default one). When not using an explicit seed, people expect different random numbers regardless of whether a function is executed in one or several processes. Note that multiprocessing *already* re-seeds the stdlib PRNG after fork, so re-seeding the Numpy PRNG is consistent with current behaviour. About it being rarely used: the aim is not use by application developers but by library authors; e.g. Numpy itself could register the re-seeding callback, which would free users from doing it themselves. It doesn't have to be used a lot to be useful.
BTW, why can't you use
I suppose Numpy could use that, but it's much more involved than registering a Python-level callback. Also it would only work because Numpy actually implements its random generator in C.
For those who don't follow both issues: I finally submitted a PR for http://bugs.python.org/issue16500, aka have at-fork handlers that work with all Python-issued fork() calls (including subprocess).
The motivation for that decision might seem a bit secondary, but I was worried that libraries that want to register a multiprocessing-based at-fork handler would always have to pay the import cost of multiprocessing. With the registration logic inside the posix module, the import cost goes away.
Superseded by bpo-16500, which has now been resolved!
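For reference, bpo-16500 was resolved by adding os.register_at_fork() in Python 3.7. A library can register its re-seeding callback once at import time, roughly like this (POSIX-only sketch, since it exercises os.fork()):

```python
import os
import random


def _reseed_after_fork():
    # Runs in the child process after os.fork() and after forks made by
    # multiprocessing's "fork" start method.
    random.seed()


# One-time registration, e.g. performed by the library itself at import time.
os.register_at_fork(after_in_child=_reseed_after_fork)

pid = os.fork()
if pid == 0:
    # Child: the callback above has already re-seeded the PRNG.
    os._exit(0)
os.waitpid(pid, 0)
```

This is exactly the shape of API the thread asked for: each library registers its own handler, and applications never need to wire anything up.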