Issue 8237: multiprocessing.Queue() blocks program

multiprocessing.Queue() blocks my program on my computer after adding about 1400 entries (the exact count depends on the size of each addition).

Tested with Python 2.6.2 and 2.6.5 (compiled from source with gcc 4.4.1) on 64-bit openSUSE 11.2.

Output is:

.... 1398 done 1399 done

and the program deadlocks because Q.put() cannot complete. There are no problems when using a plain array with a lock(). Here is the result after pressing Ctrl+C:


^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in testQ
KeyboardInterrupt

^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/opt/python/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/opt/python/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
    p.join()
  File "/opt/python/lib/python2.6/multiprocessing/process.py", line 119, in join
    res = self._popen.wait(timeout)
  File "/opt/python/lib/python2.6/multiprocessing/forking.py", line 117, in wait
    return self.poll(0)
  File "/opt/python/lib/python2.6/multiprocessing/forking.py", line 106, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
Error in sys.exitfunc:
Traceback (most recent call last):
  File "/opt/python/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/opt/python/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
    p.join()
  File "/opt/python/lib/python2.6/multiprocessing/process.py", line 119, in join
    res = self._popen.wait(timeout)
  File "/opt/python/lib/python2.6/multiprocessing/forking.py", line 117, in wait
    return self.poll(0)
  File "/opt/python/lib/python2.6/multiprocessing/forking.py", line 106, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt

multiprocessing.Queue.put() acts the same as Queue.put(): if the queue is full, the put call "hangs" until the queue is no longer full. The process will not exit, as the Queue is full and it is waiting in put().
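The blocking semantics described above can be illustrated with the thread-safe queue.Queue, which multiprocessing.Queue mirrors. This is a minimal Python 3 sketch (the report itself targets Python 2.6) showing the non-blocking and timeout variants of put() on a bounded queue:

```python
import queue

# A bounded queue with capacity 2.
q = queue.Queue(maxsize=2)
q.put(1)
q.put(2)

# A plain q.put(3) would now block until a consumer calls get().
# put_nowait() raises queue.Full instead of blocking.
try:
    q.put_nowait(3)
except queue.Full:
    print("queue is full")

# put(..., timeout=...) blocks for at most the given time, then raises Full.
try:
    q.put(3, timeout=0.1)
except queue.Full:
    print("timed out after 0.1s")
```

multiprocessing.Queue.put() accepts the same `block` and `timeout` arguments, so a producer that must never hang can use them the same way.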

This works as designed, unless I'm missing something painfully obvious, which is entirely possible.

At first I thought the same as you, but this is not correct. I added a Q.full() call to the testQ code to check whether the Queue is full:

def testQ():
    for i in range(10000):
        mp.Process(None, QueueWorker, None, (i, Q, lock)).start()
        while len(mp.active_children()) >= mp.cpu_count() + 4:
            time.sleep(0.01)
        print Q.full()

output is:

1397 done
1398 done
1399 done
False
False
False

So the Queue is not full. You can also still put items on the queue in this state (by adding an extra line to the while loop), and that does not block the while loop either.
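A caveat worth noting here: full(), empty(), and qsize() on a multiprocessing.Queue only reflect the state of the underlying pipe, which a background feeder thread fills asynchronously, so the Python documentation describes them as unreliable. This is a small Python 3 sketch (not the reporter's code) showing the lag:

```python
import multiprocessing as mp
import time

q = mp.Queue()
q.put("hello")
# full()/empty()/qsize() only reflect the state of the underlying pipe,
# which a background feeder thread fills asynchronously, so they can lag
# behind what has actually been put().
print("empty right after put():", q.empty())  # may still be True!
time.sleep(0.5)  # give the feeder thread time to flush to the pipe
print("empty after a short wait:", q.empty())
```

So a False from Q.full() tells you the pipe's buffer is not exhausted at that instant, not that a put() from another process cannot be blocked.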

Please test.

It's a dupe of issue #8426: the Queue isn't full, but the underlying pipe is, so the feeder thread blocks on the write to the pipe (actually while trying to acquire the lock protecting the pipe from concurrent access). Since the child processes join the feeder thread on exit (to make sure all data has been flushed to the pipe), they block.