Handling SIGINT in multiprocessing on Windows

On Linux the following code allows the main process to shut down its child process when a SIGINT is received (if Ctrl+C is pressed after the "Wait for event" message):

import multiprocessing as mp
import signal
import time

def func(shutdown_event):
    signal.signal(signal.SIGINT, signal.SIG_IGN)  # child ignores Ctrl+C; only the parent handles it
    print("Wait for event")
    is_set = shutdown_event.wait(timeout=10)
    print(f"Event outcome: {is_set}")

class Main():
    def run(self):
        self.shutdown_event = mp.Event()
        self.start_time = time.perf_counter()
        signal.signal(signal.SIGINT, self.handler)
        child_process = mp.Process(target=func, args=(self.shutdown_event,))
        child_process.start()
        child_process.join(timeout=5)
        print(f"Exit code: {child_process.exitcode}")

    def handler(self, signum, frame):
        elapsed_time = time.perf_counter() - self.start_time
        print(f"Main process interrupted after {elapsed_time:.3f}s")
        self.shutdown_event.set()

if __name__ == "__main__":
    mp.set_start_method("spawn")
    Main().run()

If the code is run on Windows, the signal handler is only called once join() times out:

Wait for event
Main process interrupted after 5.043s
Exit code: None
Event outcome: True

After some digging I learned that on Windows most blocking calls aren’t woken up by an interrupt (see [Python-Dev] Interrupt thread.join() with Ctrl-C / KeyboardInterrupt on Windows).

Is there a good way to handle interrupts on Windows when using multiprocessing?

Further Context
I came across this because I am working on an application that uses a Manager to share data with a process running on a remote server. To keep the logs in one place, I have an additional process running on the local machine that handles the log messages. The goal is to shut everything down in the right sequence so that no logs get lost; a rough sketch of what I mean is below.
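
To make the setup more concrete, here is a minimal sketch of the kind of shutdown sequence I am aiming for. The queue-with-sentinel logging and all names below (log_listener, worker, etc.) are only illustrative, not my actual code:

import logging
import logging.handlers
import multiprocessing as mp
import time

def log_listener(log_queue):
    # Local process that handles every log record pulled off the queue.
    logging.basicConfig(level=logging.INFO)
    while True:
        record = log_queue.get()
        if record is None:  # sentinel: all producers are done, exit cleanly
            break
        logging.getLogger(record.name).handle(record)

def worker(log_queue, shutdown_event):
    # Producers only write to the queue, so nothing is lost as long as
    # the listener is shut down last.
    logger = logging.getLogger("worker")
    logger.addHandler(logging.handlers.QueueHandler(log_queue))
    logger.setLevel(logging.INFO)
    while not shutdown_event.wait(timeout=0.5):
        logger.info("still running")
    logger.info("shutting down")

if __name__ == "__main__":
    mp.set_start_method("spawn")
    log_queue = mp.Queue()
    shutdown_event = mp.Event()
    listener = mp.Process(target=log_listener, args=(log_queue,))
    listener.start()
    worker_proc = mp.Process(target=worker, args=(log_queue, shutdown_event))
    worker_proc.start()

    time.sleep(2)  # stand-in for the application's lifetime

    # Shutdown order: stop the producers first, then the listener, so that
    # every queued record is handled before the listener exits.
    shutdown_event.set()
    worker_proc.join()
    log_queue.put(None)
    listener.join()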

pulkin (Artem Pulkin) May 1, 2025, 5:46pm 2

I would first try joining as a sanity check, perhaps in a loop with a small timeout, as in the sketch below. Your code does work on Linux.
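
Something like this minimal sketch is what I mean, reusing the names from your run() method; the 0.2 s step and the 5 s deadline are arbitrary choices:

        deadline = time.monotonic() + 5
        while child_process.is_alive() and time.monotonic() < deadline:
            # Each short join() returns to Python code between waits, giving
            # the interpreter a chance to run the registered SIGINT handler
            # promptly instead of only after the full timeout.
            child_process.join(timeout=0.2)
        print(f"Exit code: {child_process.exitcode}")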

The issue with the code cited is not related to multiprocessing, at least not directly. But I have heard that on Windows you may indeed not receive the signal in the parent process. Given the signal.signal(signal.SIGINT, signal.SIG_IGN) in your code, I suspect you were actually receiving the event in your child process; it does not make sense to me otherwise.

Hatsome (Alex) May 5, 2025, 5:57am 3

I tried joining with a smaller timeout than in the example code above, and as before the interrupt is only handled shortly after the timeout expires. Since then I have also found open issues related to this Windows-specific bug (e.g. Can’t gracefully ctrl+C multiprocessing pool on Python3 & Windows · Issue #82609 · python/cpython).

The signal does reach the parent process once the blocking call returns. I also tried adding a signal handler to the child process to see what happens, and it leads to the same behaviour: the handler only gets called once the blocking event.wait() call times out.
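
For completeness, the same short-timeout polling can be applied inside the child if one does want its handler to react promptly. This is only a minimal sketch, meant as a drop-in replacement for func() from my first post; the child_handler name and the 0.2 s step are illustrative:

def func(shutdown_event):
    interrupted = False

    def child_handler(signum, frame):
        nonlocal interrupted
        interrupted = True

    signal.signal(signal.SIGINT, child_handler)
    print("Wait for event")
    deadline = time.monotonic() + 10
    is_set = False
    while not (is_set or interrupted) and time.monotonic() < deadline:
        # Each short wait() returns to Python code, so the handler above can
        # run within ~0.2 s instead of being delayed until the full timeout.
        is_set = shutdown_event.wait(timeout=0.2)
    print(f"Event outcome: {is_set}")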