[Python-ideas] reducing multiprocessing.Queue contention

Charles-François Natali cf.natali at gmail.com
Wed Jan 23 21:03:46 CET 2013


In general, this sounds good. There's indeed no reason to perform the serialization under a lock.
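To illustrate the idea under discussion, here is a toy sketch (not CPython's actual multiprocessing.Queue implementation, and the class name is my own) of moving pickling outside the lock, so that concurrent producers can serialize in parallel and the lock is held only for the cheap buffer append:

```python
import pickle
import threading

class SimpleQueue:
    """Toy illustration: serialize *before* taking the lock, so
    concurrent producers can pickle in parallel.  Not the real
    multiprocessing.Queue, just a sketch of the locking pattern."""

    def __init__(self):
        self._lock = threading.Lock()
        self._buffer = []

    def put(self, obj):
        data = pickle.dumps(obj)   # serialization happens outside the lock
        with self._lock:           # lock held only for the cheap append
            self._buffer.append(data)

    def get(self):
        with self._lock:
            data = self._buffer.pop(0)
        return pickle.loads(data)  # deserialization also outside the lock
```

With the real queue the costly part is pickling the payload into the pipe; the same principle applies.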

It would be great to have some measurements to see just how much it takes, though.

I was curious, so I wrote a quick and dirty patch (it doesn't support timed get()/put(), so I won't post it here).

I used the attached script as a benchmark: basically, it just spawns a bunch of processes that repeatedly put()/get() some data to a queue (10000 times, a list of 1024 ints each), and returns when everything has been sent and received.
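The attachment was scrubbed from the archive, so here is a rough reconstruction of what such a benchmark might look like (the names `bench`, `writer`, and `reader` are my own, and the message count is reduced here for brevity):

```python
import multiprocessing as mp
import time

def writer(q, n, payload):
    # put() the same payload n times
    for _ in range(n):
        q.put(payload)

def reader(q, n):
    # drain n messages from the queue
    for _ in range(n):
        q.get()

def bench(n_pairs, n_msgs=1000, payload=None):
    """Time n_pairs reader/writer pairs exchanging n_msgs messages each."""
    if payload is None:
        payload = list(range(1024))  # the post uses a list of 1024 ints
    q = mp.Queue()
    procs = []
    for _ in range(n_pairs):
        procs.append(mp.Process(target=writer, args=(q, n_msgs, payload)))
        procs.append(mp.Process(target=reader, args=(q, n_msgs)))
    start = time.monotonic()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.monotonic() - start

if __name__ == "__main__":
    for n in range(1, 5):
        print("took %s seconds with %d workers" % (bench(n), n))
```

With the contended lock, total time grows roughly linearly with the number of pairs; with serialization moved outside the lock it should stay nearly flat, as the numbers below show.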

The following tests were made on an 8-core box, from 1 reader/1 writer up to 4 readers/4 writers (it would be interesting to try 1 writer with multiple readers, but the readers would spend their time waiting for input, so that needs a different benchmark):

Without patch:
"""
$ ./python /tmp/multi_queue.py
took 0.7993290424346924 seconds with 1 workers
took 1.8892168998718262 seconds with 2 workers
took 3.075777053833008 seconds with 3 workers
took 4.050479888916016 seconds with 4 workers
"""

With patch:
"""
$ ./python /tmp/multi_queue.py
took 0.7730131149291992 seconds with 1 workers
took 0.7471320629119873 seconds with 2 workers
took 0.752316951751709 seconds with 3 workers
took 0.8303961753845215 seconds with 4 workers
"""
-------------- next part --------------
A non-text attachment was scrubbed...
Name: multi_queue.py
Type: application/octet-stream
Size: 1138 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20130123/f629a381/attachment.obj>
