multiprocessing.get_logger() logger deadlock on first call by subprocess to logger.info("...") due to internal logger.debug(...) call by multiprocessing.Queue._start_thread · Issue #91555 · python/cpython
Python: v3.10.2
Platform: Windows 10
Description: A deadlock occurs when a subprocess uses the logging.Logger returned by multiprocessing.get_logger() and the logger's effective level is DEBUG or lower at the time of the first call to logger.info(...). That first call enqueues a record on the multiprocessing queue, which starts the queue's feeder thread; the top of multiprocessing.Queue._start_thread makes an internal logger.debug(...) call, which tries to enqueue a second record on the same queue, so the same thread attempts to re-acquire a lock it already holds.
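The underlying failure mode can be reproduced in isolation with a plain threading.Lock, which, like the Condition guarding the queue's buffer, is not re-entrant. This is a minimal sketch of the pattern, not the actual multiprocessing internals; the acquire timeout exists only so the demo terminates instead of hanging:

```python
import threading

lock = threading.Lock()  # non-reentrant, like the Condition inside multiprocessing.Queue

def inner():
    # In the real bug this is the nested put() triggered by
    # Queue._start_thread()'s debug() call; here acquire(timeout=...)
    # stands in so the sketch returns instead of blocking forever.
    acquired = lock.acquire(timeout=1)
    print("re-acquired:", acquired)  # False: this thread already holds the lock
    if acquired:
        lock.release()

def outer():
    with lock:   # first acquisition by this thread succeeds
        inner()  # same thread tries to take the same lock again

outer()
```

With a real (untimed) acquire, `inner()` would block forever, which is exactly the hang observed in the report.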
Workaround: Set the logging level to INFO or higher (anything above DEBUG) before the first logging call, so the debug() statement at the top of Queue._start_thread is a no-op while the queue's feeder thread is being initialized.
The following example exhibits the issue when SHOW_THE_DEADLOCK is True. Set SHOW_THE_DEADLOCK = False to observe the workaround.
```python
from concurrent.futures import ProcessPoolExecutor
import logging
import logging.handlers
from multiprocessing import freeze_support
import multiprocessing
import sys

SHOW_THE_DEADLOCK = True  # True to show the bug, False to show the workaround.

g_queue: multiprocessing.Queue = None

def global_init(logging_queue):
    global g_queue
    g_queue = logging_queue

def subprocess_logging_test():
    queue_handler = logging.handlers.QueueHandler(g_queue)
    l2 = multiprocessing.get_logger()
    l2.addHandler(queue_handler)
    if not SHOW_THE_DEADLOCK:
        # The default level is NOTSET; if the effective level is <= DEBUG the
        # deadlock occurs, so raise it before the first logging call.
        l2.setLevel("INFO")
    l2.info("write to log once at info and higher to perform thread init etc.")
    l2.setLevel("DEBUG")
    l2.info("If initial level is DEBUG, deadlock here, else OK.")
    l2.warning("If initial level is DEBUG, never reach this point, else OK.")
    l2.debug("If initial level is DEBUG, never reach this point, else OK.")
    l2.info("If initial level is DEBUG, never reach this point, else OK.")

def main():
    global g_queue
    g_queue = multiprocessing.Queue(maxsize=99)
    handler = logging.StreamHandler(stream=sys.stdout)
    listener = logging.handlers.QueueListener(g_queue, handler)
    listener.start()
    with ProcessPoolExecutor(
        initializer=global_init, initargs=(g_queue,)
    ) as pexec:
        f = pexec.submit(subprocess_logging_test)
        f.result()
    listener.stop()

if __name__ == '__main__':
    freeze_support()
    main()
```
The following is an annotated stack from the above example when SHOW_THE_DEADLOCK==True.
Innermost frames (the nested debug() call that blocks):

```
put (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:92)
        with self._notempty:  # <----- deadlock: this lock is already held by the outer put() below
            if self._thread is None:
                self._start_thread()
            self._buffer.append(obj)
            self._notempty.notify()
put_nowait (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:138)
enqueue (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\handlers.py:1423)
emit (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\handlers.py:1461)
handle (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:968)
callHandlers (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1696)
handle (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1634)
_log (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1624)
log (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1547)
debug (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\util.py:50)
_start_thread (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:165)
        def _start_thread(self):
            debug('Queue._start_thread()')  # <----- at level DEBUG or lower, triggers the nested logging call leading to the deadlock
```
Outer frames (the original l2.info() call that started the feeder thread):

```
put (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:94)
put_nowait (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:138)
enqueue (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\handlers.py:1423)
emit (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\handlers.py:1461)
handle (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:968)
callHandlers (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1696)
handle (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1634)
_log (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1624)
info (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1477)
subprocess_logging_test (c:\PythonTesting\logging_testing.py:25)
_process_worker (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\concurrent\futures\process.py:243)
run (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\process.py:108)
_bootstrap (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\process.py:315)
_main (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\spawn.py:129)
spawn_main (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\spawn.py:116)
<module> (<string>:1)
```
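For completeness, one possible mitigation (an untested sketch, not an official fix): attach the QueueHandler to an ordinary logger obtained via logging.getLogger() instead of multiprocessing.get_logger(). multiprocessing.util's internal debug() only emits through the logger created by get_logger(), so with an ordinary logger the nested record from Queue._start_thread never re-enters the queue and DEBUG is safe from the first call. The function and logger name "worker" below are hypothetical:

```python
import logging
import logging.handlers
import multiprocessing

def subprocess_logging_alternative(queue):
    # Hypothetical variant of subprocess_logging_test(): a regular logger
    # rather than multiprocessing.get_logger(), so Queue._start_thread()'s
    # internal debug() has no handler chain that re-enters this queue.
    logger = logging.getLogger("worker")  # arbitrary example name
    logger.addHandler(logging.handlers.QueueHandler(queue))
    logger.setLevel(logging.DEBUG)  # DEBUG is safe here from the very first call
    logger.debug("debug record enqueued without deadlocking")
```

This only applies if the worker does not specifically need multiprocessing's own internal diagnostics routed through the queue.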