[Python-Dev] Usage of the multiprocessing API and object lifetime

Antoine Pitrou solipsis at pitrou.net
Tue Dec 11 10:10:51 EST 2018


Hi,

On Tue, 11 Dec 2018 15:21:31 +0100 Victor Stinner <vstinner at redhat.com> wrote:

> Pablo's issue35378 evolved to add a weak reference in iterators to try to detect when the Pool is destroyed: raise an exception from the iterator, if possible.

That's an ok fix for me.
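The idea behind that fix can be sketched roughly as follows. This is a hypothetical illustration, not the actual bpo-35378 patch: the iterator holds only a weak reference to the pool, so it does not keep the pool alive, and it raises instead of hanging if the pool has already been collected.

```python
import weakref


class Pool:
    """Stand-in for multiprocessing.Pool (illustration only)."""
    def __init__(self, values):
        self.values = list(values)


class ResultIterator:
    """Hypothetical sketch: detect a garbage-collected pool via weakref."""

    def __init__(self, pool):
        # A weak reference does not keep the pool alive.
        self._pool_ref = weakref.ref(pool)
        self._index = 0

    def __iter__(self):
        return self

    def __next__(self):
        pool = self._pool_ref()
        if pool is None:
            # Raise instead of waiting forever for results that
            # will never arrive from a destroyed pool.
            raise RuntimeError("the pool has been garbage-collected")
        if self._index >= len(pool.values):
            raise StopIteration
        value = pool.values[self._index]
        self._index += 1
        return value


pool = Pool([1, 2, 3])
it = ResultIterator(pool)
first = next(it)   # works while the pool is alive
del pool           # on CPython, refcounting collects the pool immediately
try:
    next(it)
    detected = False
except RuntimeError:
    detected = True
print(first, detected)
```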

> By the way, I'm surprised that "with pool:" doesn't release all resources.

That's not a problem, as long as the destructor does release resources.

> From a technical point of view, I would prefer to become stricter.

Using "with pool:" is fine; we shouldn't start raising a warning for it.
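For context, a minimal sketch of the two cleanup styles. It uses ThreadPool, which shares multiprocessing.Pool's lifecycle API (__exit__ calls terminate(); close() plus join() waits for the workers), so it runs without spawning processes:

```python
from multiprocessing.pool import ThreadPool  # same lifecycle API as Pool


def square(x):
    return x * x


# "with pool:" calls terminate() on exit -- it does not join the
# workers, so final cleanup still relies on the destructor later.
with ThreadPool(2) as pool:
    with_results = pool.map(square, range(5))

# Explicit close() + join() waits for the workers to exit, releasing
# resources deterministically rather than via garbage collection.
pool = ThreadPool(2)
try:
    joined_results = pool.map(square, range(5))
finally:
    pool.close()
    pool.join()

print(with_results, joined_results)
```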

What you are proposing here starts to smell like an anti-pattern to me. Python is a garbage-collected language, so by definition, there are going to be resources that are automatically collected when an object disappears. If I'm allocating a 2GB bytes object, then PyPy may delay the deallocation much longer than CPython. Do you propose we add a release() method to bytes objects to avoid this issue (and emit a warning for people who don't call release() on bytes objects)?
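The timing difference is easy to observe with a weak reference. In this sketch (BigBuffer is just a stand-in for a large allocation), CPython's reference counting frees the object the moment the last reference goes away, whereas PyPy's GC may keep it alive until a later collection cycle:

```python
import weakref


class BigBuffer:
    """Stand-in for a large allocation, e.g. a 2GB bytes object."""


buf = BigBuffer()
ref = weakref.ref(buf)
del buf  # drop the last strong reference

# On CPython, refcounting has already freed the object here;
# on PyPy, the weakref might still be live until a GC cycle runs.
print(ref() is None)
```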

You can't change the language's philosophy. We warn about open files because those have user-visible consequences (such as unflushed buffers, or not being able to delete the file on Windows). If there is no user-visible consequence to not calling join() on a Pool, then we shouldn't warn about it.
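That existing precedent is visible in practice: on CPython, a file closed by its destructor rather than explicitly emits a ResourceWarning (shown by default under `python -W always` or in dev mode). A small sketch:

```python
import os
import tempfile
import warnings

# Create a throwaway file to open.
fd, path = tempfile.mkstemp()
os.close(fd)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    f = open(path)
    # The destructor closes the file and, on CPython, emits a
    # ResourceWarning because close() was never called explicitly.
    del f

os.remove(path)
unclosed = [w for w in caught if issubclass(w.category, ResourceWarning)]
print(len(unclosed))
```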

Regards

Antoine.


