Message 24904 - Python tracker

Logged In: YES user_id=279987

Here's a reference:

http://tinyurl.com/b8mk3

The relevant post:

============================================

On 25 Feb 2001 10:48:22 GMT Casper H.S. Dik - Network Security Engineer <Casper....@holland.sun.com> wrote:

| Solaris at various times used a cached /dev/zero fd both for mapping
| thread stacks and even one for the runtime linker.
| The runtime linker was mostly fine, but the thread library did have
| problems with people closing fds. We since added MAP_ANON and no
| longer require open("/dev/zero"). The caching of fds was gotten
| rid of before that.
|
| There are valid reasons to close all fds; e.g., if you really don't
| want to inherit any (you're a daemon and don't care).
|
| In most cases, though, the "close all" stuff performed by shells
| and such at startup serves no purpose. (Other than causing more bugs.)

So the dilemma is that, in a forked child, closing inherited fds can cause problems and leaving them open can also cause problems.
This tells me that hiding fds inside libraries and objects is a bad idea, because processes need to know what is safe to close and what must be left open.

======================================
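The failure mode under discussion is easy to reproduce in a few lines. This is an illustrative sketch, not real stdlib code: the cached fd stands in for something like `_urandom_fd`, and the sweep is narrowed to a single descriptor so the demo does not disturb the process's other fds (a real daemon would sweep a whole range, e.g. 3 through 255).

```python
import errno
import os

# A library-internal cached fd (stand-in for the old _urandom_fd pattern).
_cached_fd = os.open(os.devnull, os.O_RDONLY)

def close_all_fds(lo, hi):
    """The classic daemon-startup sweep: close every fd in [lo, hi)."""
    for fd in range(lo, hi):
        try:
            os.close(fd)
        except OSError:
            pass  # fd was not open

# A real daemon would call close_all_fds(3, 256); narrowed here so the
# demo only touches the one descriptor it opened itself.
close_all_fds(_cached_fd, _cached_fd + 1)

# The library has no way to know its fd is gone: the next use fails.
got_ebadf = False
try:
    os.read(_cached_fd, 1)
except OSError as e:
    got_ebadf = e.errno == errno.EBADF
```

The library cannot distinguish this EBADF from any other programming error, which is exactly why hiding the fd is fragile.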

If the Python standard library exposed a module-level list of the file descriptors it has opened, then it would be reasonable to expect programs to keep those open across fork/exec. Something like:

from os import opened_fds

And then it would be no problem to skip those when closing fds.
Otherwise, your nice daemon code that special-cases _urandom_fd will break later, when somebody caches another fd somewhere else in the standard library.
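A sketch of how such a skip-list would be used. Note that `os.opened_fds` does not actually exist; the list below simulates the proposal, and the sweep is again narrowed to the two fds the demo itself opens:

```python
import os

# Hypothetical: suppose the stdlib published the descriptors it keeps
# open internally (no such os.opened_fds exists; this list simulates it).
opened_fds = [os.open(os.devnull, os.O_RDONLY)]

# An unrelated inherited fd the daemon genuinely wants closed.
stray_fd = os.open(os.devnull, os.O_RDONLY)

def close_fds_except(fds, keep):
    """Close the given fds, skipping any the library has registered."""
    keep = set(keep)
    for fd in fds:
        if fd in keep:
            continue
        try:
            os.close(fd)
        except OSError:
            pass

# A real daemon would pass range(3, 256); narrowed for the demo.
close_fds_except([opened_fds[0], stray_fd], keep=opened_fds)

# The registered fd survived the sweep; /dev/null reads as empty.
survived = os.read(opened_fds[0], 1) == b""

# The stray fd was closed as intended.
stray_closed = False
try:
    os.read(stray_fd, 1)
except OSError:
    stray_closed = True
```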

Alternatively, the proposed os.daemonize() function, which would know about the library's own fds, would work as well.

Still, the most robust solution is for the library not to cache open fds at all, or else to catch the EBADF exception and reopen the descriptor.
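The catch-and-reopen strategy could look like this. To be clear, this is an illustrative sketch, not the actual os.urandom() implementation; the function name and module-level cache are invented for the example:

```python
import errno
import os

_urandom_fd = None  # module-level cache, mirroring the pattern at issue

def robust_urandom(n):
    """Read n random bytes, reopening the cached fd if a close-all
    sweep has invalidated it (illustrative sketch only)."""
    global _urandom_fd
    if _urandom_fd is None:
        _urandom_fd = os.open("/dev/urandom", os.O_RDONLY)
    try:
        return os.read(_urandom_fd, n)
    except OSError as e:
        if e.errno != errno.EBADF:
            raise
        # The cached fd went stale: reopen and retry once.
        _urandom_fd = os.open("/dev/urandom", os.O_RDONLY)
        return os.read(_urandom_fd, n)

first = robust_urandom(8)
os.close(_urandom_fd)       # simulate a daemon's close-all sweep
second = robust_urandom(8)  # recovers via the EBADF path
```

Note the remaining hazard: if the program reuses the fd number for another file between the sweep and the next read, there is no EBADF at all and the read silently hits the wrong file, which is another argument for not caching in the first place.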

There are several possible solutions, but closing this bug as invalid doesn't seem an appropriate one.