Issue 23267: multiprocessing pool.py doesn't close inqueue and outqueue pipes on termination

Created on 2015-01-18 13:33 by shanip, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Messages (4)
msg234244 - (view) Author: shani (shanip) Date: 2015-01-18 13:33
Multiprocessing pool.py gets SimpleQueue objects as inqueue and outqueue. When it terminates, it doesn't call the close() method of the queues' readers and writers. As a result, 4 pipe file descriptors leak per pool termination. Expected: the pool closes the reader and writer pipes of the inqueue and outqueue when it terminates. What actually happens: the pool doesn't close the pipes, and 4 pipes leak.
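For readers who want to observe this themselves, here is a minimal sketch (not from the original report) that counts the parent's open file descriptors around a Pool lifecycle. It assumes Linux, where /proc/self/fd lists the process's descriptors, and the function names are illustrative:

    # Sketch: count open file descriptors before and after a Pool lifecycle.
    # Linux-only: relies on /proc/self/fd. Run in the parent process.
    import gc
    import os
    from multiprocessing import Pool

    def fd_count():
        # Every entry in /proc/self/fd is an open descriptor of this process.
        return len(os.listdir('/proc/self/fd'))

    def square(x):
        return x * x

    if __name__ == '__main__':
        before = fd_count()
        pool = Pool(4)
        pool.map(square, range(10))
        pool.terminate()
        pool.join()
        after_terminate = fd_count()
        # The queue pipes are only closed when the SimpleQueue objects are
        # garbage-collected (Connection.__del__), so dropping the pool and
        # forcing a collection should bring the count back down.
        del pool
        gc.collect()
        after_gc = fd_count()
        print(before, after_terminate, after_gc)

If the count only drops after the explicit gc.collect(), the pipes were held open until the queue objects were garbage-collected rather than being closed at termination.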
msg234525 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2015-01-22 23:10
Interestingly, there is no close() method on SimpleQueue...
msg289180 - (view) Author: Camilla Montonen (Winterflower) Date: 2017-03-07 18:33
I did some investigating using a test script and Python 3.7.0a0:

    from multiprocessing import Pool
    import os
    import time

    def f(x):
        time.sleep(30)
        return x*x

    if __name__ == '__main__':
        print('Main pid {0}'.format(os.getpid()))
        p = Pool(5)
        p.map(f, [1,2,3])
        print('Returned')
        time.sleep(30)

and grepping for pipe and the parent pid in the output from lsof (lsof | grep python.*.*pipe). The pipes opened at the start of the script are still open even after the line print('Returned') is executed. I suppose this is expected because I did not call p.close(). All pipes are cleaned up after the parent process finishes. When I repeat the experiment calling p.close() after p.map returns, all that is left is the 9 pipes opened by the parent. All pipes are cleaned up after the parent script exits. @shani - could you please clarify how you were able to detect the leaking pipes?
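The repeated experiment mentioned above (calling p.close() after p.map returns) would look roughly like this; the pipe counts quoted in the message are the reporter's own observations, and the sleeps are kept only to leave a window for inspecting the process with lsof:

    import os
    import time
    from multiprocessing import Pool

    def f(x):
        time.sleep(30)
        return x * x

    if __name__ == '__main__':
        print('Main pid {0}'.format(os.getpid()))
        p = Pool(5)
        p.map(f, [1, 2, 3])
        p.close()   # no new tasks; workers exit once idle
        p.join()    # wait for the worker processes to finish
        print('Returned')
        time.sleep(30)   # window for inspecting pipes with lsof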
msg298663 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-07-19 10:21
I think this issue is mistaken. The reader and writer objects are closed automatically when they are destroyed (see Connection.__del__). The only thing that may be lacking is a way to close them more eagerly. In any case, I'm closing as a duplicate of issue 30966.
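For context, the superseding issue did add a close() method to multiprocessing.SimpleQueue in later Python versions, so the pipes can be released eagerly. A minimal sketch of such eager cleanup, falling back to the private _reader/_writer attributes where close() is not available (internals, not a public API):

    # Sketch: eagerly release a SimpleQueue's pipe file descriptors.
    from multiprocessing import SimpleQueue

    q = SimpleQueue()
    q.put(42)
    print(q.get())

    if hasattr(q, 'close'):
        # close() releases both underlying Connection objects immediately
        q.close()
    else:
        # Fallback on older versions: _reader/_writer are private attributes
        q._reader.close()
        q._writer.close()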
History
Date User Action Args
2022-04-11 14:58:12 admin set github: 67456
2017-07-19 10:21:12 pitrou set status: open -> closed; dependencies: - Add multiprocessing.queues.SimpleQueue.close(); superseder: Add multiprocessing.queues.SimpleQueue.close(); nosy: + pitrou; messages: +; resolution: duplicate; stage: resolved
2017-07-19 09:47:59 xiang.zhang set dependencies: + Add multiprocessing.queues.SimpleQueue.close()
2017-05-17 08:55:37 xiang.zhang set nosy: + xiang.zhang
2017-03-08 04:44:45 josh.r set nosy: + josh.r
2017-03-07 18:33:13 Winterflower set messages: +
2017-02-12 20:36:24 Winterflower set nosy: + Winterflower
2015-01-22 23:10:19 neologix set nosy: + neologix; messages: +
2015-01-18 14:26:41 pitrou set nosy: + sbt, davin
2015-01-18 13:33:39 shanip create