coverage with Python 3.8b2 breaks multiprocessing · Issue #828 · nedbat/coveragepy

Describe the bug
When using coverage with concurrency=multiprocessing under Python 3.8b2, forked child processes don't fully initialize. The child worker process starts and its __init__ code runs, but the Process's run() target never executes. The minimal repro below works fine under other released versions of Python.

To Reproduce
How can we reproduce the problem? Please be specific.

  1. What version of Python are you running? 3.8b2
  2. What versions of what packages do you have installed? coverage==4.5.3
  3. What code are you running?

repro.py

import multiprocessing


class Worker(multiprocessing.Process):
    def __init__(self, result_queue, input):
        print("in child init")
        super(Worker, self).__init__()
        self._rq = result_queue
        self._input = input
        print("child init done")

    def run(self):
        print("in child run")
        self._rq.put("worker ran with {0}".format(self._input))


rq = multiprocessing.Queue()

w = Worker(rq, "hello")
w.start()

print("worker pid is {0}, waiting for results...".format(w.pid))

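# Under the affected setup the child's run() never executes, so this get() blocks forever.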
results = rq.get()
print(results)
print("done")
  4. What commands did you run? coverage3 run --concurrency=multiprocessing repro.py
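For completeness, the same setting can be supplied through a coverage config file instead of the command line; a minimal sketch (file name and location assumed to be the project root):

.coveragerc

[run]
concurrency = multiprocessing

With that in place, coverage3 run repro.py should behave the same as the command above. Note that with multiprocessing concurrency the child processes write separate data files, so a coverage combine step is typically needed afterwards.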

Expected behavior
The child workload completes and "done" is printed (along with some debug output).
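For reference, an unaffected interpreter (e.g. Python 3.7) prints roughly the following, though parent and child lines may interleave:

in child init
child init done
worker pid is <pid>, waiting for results...
in child run
worker ran with hello
done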

Additional context
We started hitting this recently in Ansible's nightly coverage runs (which include Python 3.8 prereleases as a canary)...