msg139754
Author: Luke (lcampagn)
Date: 2011-07-04 11:43
I have found that when using multiprocessing.Connection objects to pass data between two processes, closing one end of the pipe is not properly communicated to the other end. My expectation was that when calling recv() on the remote end, it should raise EOFError if the pipe has been closed. Instead, the remote recv() blocks indefinitely. This behavior exists on Linux and Cygwin, but NOT on native Windows.

Example:

    import multiprocessing as m

    def fn(pipe):
        print "recv:", pipe.recv()
        print "recv:", pipe.recv()

    if __name__ == '__main__':
        p1, p2 = m.Pipe()
        pr = m.Process(target=fn, args=(p2,))
        pr.start()
        p1.send(1)
        p1.close()  ## should generate EOFError in remote process
|
|
msg139765
Author: Charles-François Natali (neologix)
Date: 2011-07-04 13:24
That's because the other end of the pipe (p1) is open in the child process (FDs are inherited on fork()). Just add

    p1.close()

at the beginning of fn() and you'll get EOF.

Closing as invalid.
|
|
msg139769
Author: Luke (lcampagn)
Date: 2011-07-04 13:56
That's interesting, thanks for your response. It is also a bit awkward. Might I recommend adding a note to the documentation? It is not really intuitive that each child should need to close the end of the pipe it isn't using (especially since it is possible to create a child that has no explicit access to that end of the pipe, even though it has inherited the file descriptor).
|
|
msg139788
Author: Charles-François Natali (neologix)
Date: 2011-07-04 16:57
Well, in this regard it behaves like a Unix pipe/socket (in the duplex case it's implemented with a Unix domain socket), so I find it quite natural (of course, you have to know about FD inheritance upon fork()). I'm not convinced it's necessary. Antoine, any thoughts on that?
|
|
msg139789
Author: Antoine Pitrou (pitrou)
Date: 2011-07-04 17:07
Well, I think it deserves a comment in the documentation that the behaviour of Pipes and Queues when one of the processes terminates is undefined and implementation-dependent. By the way, there's internal support in 3.3 to reliably detect killed children, and it's used by concurrent.futures: http://docs.python.org/dev/library/concurrent.futures.html#concurrent.futures.BrokenProcessPool. However, I'm not sure there's an easy way to detect a killed master process from one of the worker processes.
|
|
msg139791
Author: Charles-François Natali (neologix)
Date: 2011-07-04 17:11
Alright. Luke, if you're motivated, feel free to provide a patch. The relevant file is Doc/library/multiprocessing.rst. |
|
|
msg139792
Author: Antoine Pitrou (pitrou)
Date: 2011-07-04 17:15
By the way, if you don't want child processes to continue running when the master exits, just make them daemonic processes (by adding "daemon=True" to the Process() constructor call).
|
|
msg159282
Author: Sye van der Veen (syeberman)
Date: 2012-04-25 13:25
This issue _does_ exist on Windows, and is not limited to the case where the master process exits before its children. The following code, which is almost exactly that from the 2.7.3 documentation, deadlocks on Win7 (Py3.2 and 2.7) and WinXP (Py3.2 and 2.6):

    from multiprocessing import Process, Pipe
    import sys

    def f(conn):
        #conn.send([42, None, 'hello'])  # uncomment second
        conn.close()

    if __name__ == "__main__":
        parent_conn, child_conn = Pipe()
        p = Process(target=f, args=(child_conn,))
        p.start()
        #child_conn.close()  # uncomment first
        sys.stdout.write("about to receive\n")
        sys.stdout.write("%s\n" % parent_conn.recv())
        sys.stdout.write("received\n")
        p.join()

If you "uncomment first", recv raises an EOFError; if you also "uncomment second", recv succeeds. If this behaviour is the same on other platforms, then it seems all that is required is to update the documentation.
|
|