msg289721 - Author: Antoine Pitrou (pitrou) - Date: 2017-03-16 16:45
Currently, multiprocessing has hard-coded logic to re-seed the Python random generator (in the random module) whenever a process is forked. This is present in two places: `Popen._launch` in `popen_fork.py` and `serve_one` in `forkserver.py` (for the "fork" and "forkserver" start methods, respectively).

However, other libraries would like to benefit from this mechanism. For example, Numpy has its own random number generator that would also benefit from re-seeding after fork(). Currently, this is solvable using multiprocessing.Pool, which has an `initializer` argument; but concurrent.futures' ProcessPoolExecutor does not offer such a facility, nor do other ways of launching child processes, such as (simply) instantiating a new Process object.

Therefore, I'd like to propose adding a new top-level function in multiprocessing (and also a new Context method) to register an initializer function to be run after fork(). That way, each library can add its own initializers if desired, freeing users from the burden of doing so in their applications.
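For reference, a minimal sketch of the Pool-based workaround mentioned above, using the `initializer` argument to re-seed Numpy's global PRNG in each worker (`_reseed_numpy` and `draw` are illustrative names, not part of any API):

```python
import multiprocessing

import numpy as np

def _reseed_numpy():
    # Re-seed Numpy's global PRNG from OS entropy so that forked
    # workers don't all inherit (and replay) the parent's RNG state.
    np.random.seed()

def draw(_):
    return np.random.random()

if __name__ == "__main__":
    ctx = multiprocessing.get_context("fork")
    with ctx.Pool(4, initializer=_reseed_numpy) as pool:
        print(pool.map(draw, range(4)))  # four distinct values
```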
|
|
msg289726 - Author: Yury Selivanov (yselivanov) - Date: 2017-03-16 18:24
Maybe a better way would be to proceed with http://bugs.python.org/issue16500? |
|
|
msg289728 - Author: Antoine Pitrou (pitrou) - Date: 2017-03-16 19:01
That issue seems to have stalled, partly because it focused on low-level APIs, and partly because it proposes a new module, with the API questions that entails. Another possible stance is that os.fork() should be left as-is, as a low-level primitive, and this functionality should be provided by the higher-level multiprocessing module.
|
|
msg289730 - Author: Davin Potts (davin) - Date: 2017-03-16 20:50
Having read through the two older issues, I worry that this one could become bogged down with similar concerns.

With the specific example of NumPy, I am not sure I would want its random number generator to be reseeded with each forked process. There are many situations where I very much need to preserve the original seed and/or the current PRNG state. I do not yet see a clear, motivating use case even after reading those two older issues. I worry that if this were added, it would (almost?) never get used, either because the need is rare or because developers will more often solve it themselves at the start of their own target functions.

The suggestion of a top-level function and a Context method makes good sense to me as a place to offer such a thing, but is there a clearer use case?
|
|
msg289731 - Author: Nathaniel Smith (njs) - Date: 2017-03-16 20:55
I think ideally on numpy's end we would reseed iff the RNG was unseeded. Now that I think about it I'm a little surprised that we haven't had more complaints about this, so I guess it's not a super urgent issue, but that would be an improvement over the status quo, I think. |
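A hypothetical sketch of that idea; the `_explicitly_seeded` flag and the `seed` wrapper are invented for illustration and are not NumPy's actual internals:

```python
import numpy as np

_explicitly_seeded = False
_original_seed = np.random.seed

def seed(value=None):
    # Wrapper that remembers whether the user chose a seed explicitly.
    global _explicitly_seeded
    _explicitly_seeded = value is not None
    _original_seed(value)

def reseed_after_fork():
    # Only touch the global PRNG if its state was never pinned down
    # by an explicit seed; otherwise leave it alone.
    if not _explicitly_seeded:
        np.random.seed()  # draw fresh entropy from the OS
```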
|
|
msg289733 - Author: Antoine Pitrou (pitrou) - Date: 2017-03-16 21:00
The use case is quite clear here. The specific need to re-seed the Numpy PRNG has already come up in two different projects I work on: Numba and Dask. I wouldn't be surprised if other libraries have similar needs.

If you want a reproducible RNG sequence, you should actually use a specific, explicit seed (and possibly instantiate a dedicated random state instead of using the default one). When not using an explicit seed, people expect different random numbers regardless of whether a function is executed in one or several processes. Note that multiprocessing *already* re-seeds the stdlib PRNG after fork, so re-seeding the Numpy PRNG is consistent with current behaviour.

About it being rarely used: the aim is not use by application developers but by library authors; e.g. Numpy itself could register the re-seeding callback, which would free users from doing it themselves. It doesn't have to be used a lot to be useful.
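To make the reproducibility point concrete, a dedicated random state with an explicit seed is unaffected by any re-seeding of the global PRNG (a small example, not part of the proposal):

```python
import numpy as np

# Reproducible: a dedicated generator with an explicit seed. Its state
# is private, so re-seeding the global PRNG after fork() cannot affect it.
rs = np.random.RandomState(12345)
print(rs.random_sample(3))  # same numbers on every run

# Not reproducible by default: the global PRNG, which the proposed
# at-fork hook would re-seed in each child process.
print(np.random.random_sample(3))
```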
|
|
msg289735 - Author: Yury Selivanov (yselivanov) - Date: 2017-03-16 21:37
BTW, why can't you use `pthread_atfork` in numpy? |
|
|
msg294134 - Author: Antoine Pitrou (pitrou) - Date: 2017-05-22 08:47
> BTW, why can't you use `pthread_atfork` in numpy?

I suppose Numpy could use that, but it's much more involved than registering a Python-level callback. Also, it would only work because Numpy actually implements its random generator in C.
|
|
msg294237 - Author: Antoine Pitrou (pitrou) - Date: 2017-05-23 08:02
For those who don't follow both issues: I finally submitted a PR for http://bugs.python.org/issue16500, i.e. at-fork handlers that work with all Python-issued fork() calls (including subprocess).
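(For context: issue 16500 ultimately added os.register_at_fork() in Python 3.7. A minimal sketch of how a library could use it to re-seed its PRNG in forked children:)

```python
import os

import numpy as np

def _reseed_numpy():
    # Runs in every child created by a Python-level fork(); re-seeds
    # Numpy's global PRNG from OS entropy.
    np.random.seed()

# POSIX-only; available since Python 3.7 (the resolution of issue 16500).
os.register_at_fork(after_in_child=_reseed_numpy)

pid = os.fork()
if pid == 0:
    print("child:", np.random.random())
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print("parent:", np.random.random())
```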
|
|
msg294238 - Author: Antoine Pitrou (pitrou) - Date: 2017-05-23 08:04
The motivation for that decision might seem a bit secondary, but I was worried that libraries wanting to register a multiprocessing-based at-fork handler would always have to pay the import cost of multiprocessing. With the registration logic inside the posix module, that import cost goes away.
|
|
msg294598 - Author: Antoine Pitrou (pitrou) - Date: 2017-05-27 15:52
Superseded by issue 16500, which has now been resolved! |
|
|