Issue 26903: ProcessPoolExecutor(max_workers=64) crashes on Windows


Created on 2016-05-01 20:45 by diogocp, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Pull Requests
URL       Status  By              Date
PR 13132  merged  bquinlan        2019-05-06 19:07
PR 13206  closed  miss-islington  2019-05-08 18:05
PR 13643  merged  miss-islington  2019-05-29 02:38
Messages (12)
msg264608 - (view) Author: Diogo Pereira (diogocp) Date: 2016-05-01 20:45
I'm using Python 3.5.1 x86-64 on Windows Server 2008 R2. Trying to run the ProcessPoolExecutor example [1] generates this exception:

    Exception in thread Thread-1:
    Traceback (most recent call last):
      File "C:\Program Files\Python35\lib\threading.py", line 914, in _bootstrap_inner
        self.run()
      File "C:\Program Files\Python35\lib\threading.py", line 862, in run
        self._target(*self._args, **self._kwargs)
      File "C:\Program Files\Python35\lib\concurrent\futures\process.py", line 270, in _queue_management_worker
        ready = wait([reader] + sentinels)
      File "C:\Program Files\Python35\lib\multiprocessing\connection.py", line 859, in wait
        ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout)
      File "C:\Program Files\Python35\lib\multiprocessing\connection.py", line 791, in _exhaustive_wait
        res = _winapi.WaitForMultipleObjects(L, False, timeout)
    ValueError: need at most 63 handles, got a sequence of length 64

The problem seems to be related to the value of the Windows constant MAXIMUM_WAIT_OBJECTS (see [2]), which is 64. This machine has 64 logical cores, so ProcessPoolExecutor defaults to 64 workers. Lowering max_workers to 63 or 62 still results in the same exception, but max_workers=61 works fine.

[1] https://docs.python.org/3.5/library/concurrent.futures.html#processpoolexecutor-example
[2] https://hg.python.org/cpython/file/80d1faa9735d/Modules/_winapi.c#l1339
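[Editor's note] The report pins the practical ceiling at 61 workers. On an affected Python version, a program can apply that cap itself; the helper below is a sketch based only on the numbers in this thread (the name `capped_worker_count` is illustrative, not from the issue):

```python
# Workaround sketch: cap ProcessPoolExecutor's worker count at 61, the
# largest value the reporter found to work on a 64-core Windows machine.
import os

# 61 = the 63-handle _winapi wait limit minus the two extra handles the
# executor's management thread also waits on (per later messages here).
MAX_WINDOWS_WORKERS = 61

def capped_worker_count(requested=None):
    """Return a worker count that stays under the Windows wait-handle limit."""
    if requested is None:
        requested = os.cpu_count() or 1
    return min(requested, MAX_WINDOWS_WORKERS)
```

Passing `capped_worker_count()` as max_workers keeps the pool under the limit on machines with 62 or more logical cores, while leaving smaller machines unchanged.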
msg265007 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2016-05-06 19:11
The example runs fine, in about 1 second, on my 6 core (which I guess is 12 logical cores) Pentium. I am guessing that the default number of workers needs to be changed, at least on Windows, to min(#logical_cores, 60)
msg265086 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2016-05-07 18:23
Just noting that the `multiprocessing` module can be used instead. In the example, add

    import multiprocessing as mp

and change

    with concurrent.futures.ProcessPoolExecutor() as executor:

to

    with mp.Pool() as executor:

That's all it takes. On my 4-core Win10 box (8 logical cores), that continued to work fine even when passing 1024 to mp.Pool() (although it obviously burned time and RAM to create over a thousand processes). Some quick Googling strongly suggests there's no reasonably general way to overcome the Windows-defined MAXIMUM_WAIT_OBJECTS=64 for implementations that call the Windows WaitForMultipleObjects().
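[Editor's note] Applied to the docs example, the substitution looks roughly like this. PRIMES is shortened here and `is_prime` is a simplified stand-in for the one in the docs example:

```python
# Tim's substitution applied to a trimmed version of the docs example:
# multiprocessing.Pool in place of ProcessPoolExecutor.
import multiprocessing as mp

PRIMES = [112272535095293, 112582705942171, 115280095190773]

def is_prime(n):
    # Simple trial division; stands in for the docs example's version.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def main():
    # Pool.map mirrors executor.map closely enough for this example, and
    # mp.Pool does not go through the WaitForMultipleObjects-limited path.
    with mp.Pool() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()
```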
msg265206 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2016-05-09 16:15
> Some quick Googling strongly suggests there's no reasonably general way to overcome the Windows-defined MAXIMUM_WAIT_OBJECTS=64 for implementations that call the Windows WaitForMultipleObjects().

The recommended way to deal with this is to spin up threads to do the wait (which sounds horribly inefficient, but threads on Windows are cheap, especially if they are waiting on kernel objects), and then wait on each thread.

Personally I think it'd be fine to make the _winapi module do that transparently for WaitForMultipleObjects, as it's complicated to get right (you need to ensure you map back to the original handle, timeouts and cancellation get complicated, there are real race conditions (mainly for auto-reset events), etc.), but in all circumstances it's better than just failing immediately. Handling it within multiprocessing isn't a bad idea, but won't help other users.

I'd love to write the code to do it, but I doubt I'll get time (especially since I'm missing the PyCon US sprints this year). Happy to help someone else through it. We're going to see Python being used on more and more multicore systems over time, where this will become a genuine issue.
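[Editor's note] A portable sketch of the thread-per-chunk idea described above. The names and chunk size are illustrative, and multiprocessing.connection.wait stands in for the raw WaitForMultipleObjects call so the sketch runs on any platform (the real fix would live in _winapi and wait on raw handles):

```python
# Sketch: wait on arbitrarily many objects by giving each thread a chunk
# small enough for one underlying wait call, then joining the threads.
import multiprocessing.connection as mpc
import threading

CHUNK = 60  # safely below the 63-handle ceiling shown in the traceback

def chunked_wait(objects, timeout):
    """Return every object that became ready within `timeout` seconds.

    Production code would also need cancellation so the remaining threads
    stop waiting as soon as one chunk reports a ready object -- one of the
    complications Steve mentions.
    """
    ready = []
    lock = threading.Lock()

    def worker(chunk):
        hits = mpc.wait(chunk, timeout)
        with lock:
            ready.extend(hits)

    threads = [threading.Thread(target=worker, args=(objects[i:i + CHUNK],))
               for i in range(0, len(objects), CHUNK)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return ready
```

With, say, 70 pipe readers and one signalled writer, chunked_wait returns the ready reader within the timeout, while a single 64-object wait call could not even accept the list.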
msg340390 - (view) Author: Robert Collins (rbcollins) * (Python committer) Date: 2019-04-17 10:59
This is now showing up in end user tools like black: https://github.com/ambv/black/issues/564
msg341545 - (view) Author: Brian Quinlan (bquinlan) * (Python committer) Date: 2019-05-06 15:48
If no one has short-term plans to improve multiprocessing.connection.wait, then I'll update the docs to list this limitation, ensure that ProcessPoolExecutor never defaults to >60 processes on Windows, and raise a ValueError if the user explicitly passes a larger number.
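[Editor's note] What Brian describes is roughly the check that GH-13132 later added to ProcessPoolExecutor (the merged change settled on a limit of 61). A standalone sketch; the function name, `platform` parameter, and messages are illustrative:

```python
# Sketch of the proposed behavior: cap the default on Windows and reject
# explicit values above the limit with ValueError.
import os
import sys

_MAX_WINDOWS_WORKERS = 61

def resolve_max_workers(max_workers=None, platform=None):
    """Validate and normalize max_workers the way the proposed fix would.

    `platform` is injectable here only so the Windows branch can be
    exercised on any OS; the real code just reads sys.platform.
    """
    platform = platform or sys.platform
    if max_workers is None:
        max_workers = os.cpu_count() or 1
        if platform == "win32":
            max_workers = min(_MAX_WINDOWS_WORKERS, max_workers)
    else:
        if max_workers <= 0:
            raise ValueError("max_workers must be greater than 0")
        if platform == "win32" and max_workers > _MAX_WINDOWS_WORKERS:
            raise ValueError(
                "max_workers must be <= %d on Windows" % _MAX_WINDOWS_WORKERS)
    return max_workers
```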
msg341571 - (view) Author: Brian Quinlan (bquinlan) * (Python committer) Date: 2019-05-06 17:36
BTW, the 61 process limit comes from: 63 (the _winapi wait limit) minus the result queue reader and the thread wakeup pipe that the management thread also waits on = 61.
msg341918 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2019-05-08 18:04
New changeset 39889864c09741909da4ec489459d0197ea8f1fc by Steve Dower (Brian Quinlan) in branch 'master': bpo-26903: Limit ProcessPoolExecutor to 61 workers on Windows (GH-13132) https://github.com/python/cpython/commit/39889864c09741909da4ec489459d0197ea8f1fc
msg343858 - (view) Author: Ned Deily (ned.deily) * (Python committer) Date: 2019-05-29 03:12
New changeset 8ea0fd85bc67438f679491fae29dfe0a3961900a by Ned Deily (Miss Islington (bot)) in branch '3.7': bpo-26903: Limit ProcessPoolExecutor to 61 workers on Windows (GH-13132) (GH-13643) https://github.com/python/cpython/commit/8ea0fd85bc67438f679491fae29dfe0a3961900a
msg365886 - (view) Author: Mike Hommey (Mike Hommey) Date: 2020-04-07 03:01
This is still a problem in python 3.7 (and, I guess 3.8). When not even giving a max_workers, it fails with a ValueError exception on _winapi.WaitForMultipleObjects, with the message "need at most 63 handles, got a sequence of length 63" That happens with max_workers=None and max_workers=61 ; not max_workers=60. I wonder if there's an off-by-one in this test: https://github.com/python/cpython/blob/7668a8bc93c2bd573716d1bea0f52ea520502b28/Modules/_winapi.c#L1708
msg365901 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2020-04-07 10:25
More likely there's been another change to the events that are listened to by multiprocessing, which didn't update the overall limit. File a new bug, please.
msg366314 - (view) Author: Ray Donnelly (Ray Donnelly) * Date: 2020-04-13 13:36
I took the liberty of filing this: https://bugs.python.org/issue40263 Cheers.
History
Date User Action Args
2022-04-11 14:58:30 admin set github: 71090
2022-01-29 19:57:11 iritkatriel link issue39339 superseder
2021-11-04 13:56:41 eryksun set nosy: - Alex.Willmer, ahmedsayeed1982
2021-11-04 13:54:46 eryksun set messages: -
2021-11-04 12:13:08 ahmedsayeed1982 set versions: + Python 3.6, - Python 3.7, Python 3.8; nosy: + Alex.Willmer, ahmedsayeed1982, - tim.peters, terry.reedy, paul.moore, bquinlan, rbcollins, tim.golden, ned.deily, sbt, zach.ware, steve.dower, davin, diogocp, Ray Donnelly, Mike Hommey; messages: +; components: + Cross-Build, - Windows
2020-04-13 13:36:26 Ray Donnelly set nosy: + Ray Donnelly; messages: +
2020-04-07 10:25:16 steve.dower set messages: +
2020-04-07 03:01:49 Mike Hommey set nosy: + Mike Hommey; messages: +
2019-05-29 03:13:32 ned.deily set versions: - Python 3.5, Python 3.6, Python 3.9
2019-05-29 03:12:47 ned.deily set nosy: + ned.deily; messages: +
2019-05-29 02:38:41 miss-islington set pull_requests: + pull_request13539
2019-05-09 17:37:35 bquinlan set status: open -> closed; resolution: fixed; stage: patch review -> resolved
2019-05-08 18:05:04 miss-islington set pull_requests: + pull_request13117
2019-05-08 18:04:58 steve.dower set messages: +
2019-05-06 19:07:35 bquinlan set keywords: + patch; stage: needs patch -> patch review; pull_requests: + pull_request13045
2019-05-06 17:36:09 bquinlan set messages: +
2019-05-06 15:53:57 bquinlan set assignee: bquinlan
2019-05-06 15:48:56 bquinlan set messages: +
2019-04-17 10:59:22 rbcollins set nosy: + rbcollins; messages: +; versions: + Python 3.7, Python 3.8, Python 3.9
2016-07-02 20:22:58 davin set nosy: + davin
2016-05-09 16:16:21 steve.dower set stage: needs patch; type: behavior; versions: + Python 3.6
2016-05-09 16:15:50 steve.dower set messages: +
2016-05-07 18:23:03 tim.peters set nosy: + tim.petersmessages: +
2016-05-07 11:19:53 pitrou set nosy: + sbt
2016-05-06 19:11:20 terry.reedy set nosy: + terry.reedy, bquinlan; messages: +
2016-05-01 20:45:38 diogocp create