bpo-28053: Allow custom reducer when using multiprocessing by pierreglaser · Pull Request #15058 · python/cpython

You will notice that for the Process class, I chose to pass the reducer (which, for now, is the only bit of information we need from the context) to the Process class at construction time, instead of passing the whole Context:

def process(self, *args, **kwargs):
    return self._Process(*args, reducer=self.get_reducer(), **kwargs)

This comes from the fact that the module-level Process class actually relies on the default context being properly defined:

class Process(process.BaseProcess):
    _start_method = None

    @staticmethod
    def _Popen(process_obj):
        return _default_context.get_context().Process._Popen(process_obj)

For the Process class, passing the context object that a Process binds to would require changing it to something like:

class Process(process.BaseProcess):
    _start_method = None

    @staticmethod
    def _Popen(process_obj):
        if self._ctx is not None:
            return self._ctx.get_context()._Process._Popen(process_obj)
        return _default_context.get_context()._Process._Popen(process_obj)

with _Process being a Process class specified by the user (or not, in which case the default is Process). You can see the infinite recursion here: if _Process is not specified, or is set to Process, then self._ctx.get_context()._Process._Popen is Process._Popen itself... Preventing this situation would require adding a bunch of guards, and that ends up being not so trivial.
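To make the recursion concrete, here is a minimal, self-contained sketch (the names Context, _ctx, and _Process mirror the snippet above but are hypothetical, not actual CPython attributes). When _Process is left unset, the fallback target is Process itself, so _Popen calls itself forever:

```python
class Context:
    # A user may set a custom Process subclass here; None means "use default".
    _Process = None

    def get_context(self):
        return self


class Process:
    _ctx = Context()

    @staticmethod
    def _Popen(process_obj):
        ctx = Process._ctx.get_context()
        # Fall back to Process when no custom class is configured...
        target = ctx._Process if ctx._Process is not None else Process
        # ...which makes this call resolve back to Process._Popen itself.
        return target._Popen(process_obj)
```

Calling Process._Popen here raises RecursionError, which is exactly the situation the extra guards would have to prevent.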

So implementation-wise, it is easier to simply pass the reducer in BaseContext.process rather than the whole context. One noticeable consequence is that the reducer in Process is static: it is fixed at construction time instead of being looked up via something like self._ctx.get_reducer(), as is done elsewhere.
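A minimal sketch of that design, with stand-in classes rather than the actual CPython ones (the Reducer class and its dumps method are hypothetical placeholders): the context hands only its reducer to the Process it creates, so Process never needs a back-reference to the full context.

```python
import pickle


class Reducer:
    """Stand-in for a pickling-customization object (hypothetical name)."""

    def dumps(self, obj):
        return pickle.dumps(obj)


class BaseProcess:
    def __init__(self, *args, reducer=None, **kwargs):
        # The reducer is fixed here, at construction time -- it is never
        # looked up again through a context.
        self._reducer = reducer if reducer is not None else Reducer()


class BaseContext:
    _reducer = Reducer()

    def get_reducer(self):
        return self._reducer

    def process(self, *args, **kwargs):
        # Pass only the reducer, not the whole context object.
        return BaseProcess(*args, reducer=self.get_reducer(), **kwargs)
```

With this shape, ctx.process() yields a process object whose _reducer is exactly the one the context handed over, and no circular context lookup is needed.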

But I am open to suggestions.