[Python-Dev] Thread-safe file objects, the return

Antoine Pitrou solipsis at pitrou.net
Wed Apr 2 12:17:48 CEST 2008


Guido van Rossum <guido at python.org> writes:

> Your solution (a counter) seems fine except I think perhaps the close() call should not raise IOError -- instead, it should set a flag so that the thread that makes the counter go to zero can close the file (after all, the file got closed while it was being used).

I agree with Gregory: we should be explicit about what happens. I wonder what we would gain from that approach, apart from encouraging dangerous coding practices :) It also depends on how far we want to go: I am merely proposing to fix the crashes; do we also want to provide a "smarter" close() variant that does what you suggest, for people who want (or need) to take the risk?
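
To make the two behaviours concrete, here is a minimal pure-Python sketch of the counter idea. It is not the actual patch (which does the equivalent bookkeeping at the C level, inside the file object implementation), and the class and attribute names are hypothetical:

    from __future__ import with_statement  # for Python 2.5
    import threading

    class GuardedFile:
        # Hypothetical wrapper illustrating the counter idea: every I/O call
        # bumps a counter of in-progress operations, and close() either
        # refuses to run or is deferred while the counter is non-zero.
        def __init__(self, path, mode="r"):
            self._f = open(path, mode)
            self._lock = threading.Lock()
            self._in_progress = 0
            self._close_pending = False

        def read(self, size=-1):
            with self._lock:
                if self._f.closed or self._close_pending:
                    raise ValueError("I/O operation on closed file")
                self._in_progress += 1
            try:
                return self._f.read(size)
            finally:
                with self._lock:
                    self._in_progress -= 1
                    # Guido's suggestion: the last in-progress operation
                    # performs the deferred close.
                    if self._close_pending and self._in_progress == 0:
                        self._f.close()

        def close(self):
            with self._lock:
                if self._in_progress:
                    # What I proposed: raise IOError("close() during concurrent I/O")
                    # Guido's suggestion instead: remember the request and let
                    # the last in-progress operation close the file.
                    self._close_pending = True
                else:
                    self._f.close()

The sketch is only meant to show the shape of the two options; the real work happens in C around the points where the GIL is released.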

> There are of course other concurrency issues besides close -- what if two threads both try to do I/O on the file? What will the C stdio library do in that case? Are stdio files thread-safe at the C level?

According to the glibc documentation at http://www.gnu.org/software/libc/manual/html_node/Streams-and-Threads.html:

« The POSIX standard requires that by default the stream operations are atomic. I.e., issuing two stream operations for the same stream in two threads at the same time will cause the operations to be executed as if they were issued sequentially. The buffer operations performed while reading or writing are protected from other uses of the same stream. To do this each stream has an internal lock object which has to be (implicitly) acquired before any work can be done. »

So according to POSIX rules it should be perfectly safe. In any case, someone would have to try my patch under Windows and OS X and see if test_file.py passes without crashing.
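
As a quick illustration (not part of the patch), a test along the following lines should show that guarantee in action, assuming the platform follows the POSIX rule and that each write() of a str maps to a single fwrite() call on the underlying stdio stream:

    import threading

    # Two threads write full lines to the same stdio-backed file object.
    # If the platform honours the POSIX locking rule quoted above, and each
    # write() translates to one fwrite() call, no output line should be
    # interleaved.
    def writer(f, marker, count=10000):
        line = marker * 60 + "\n"
        for i in range(count):
            f.write(line)

    f = open("atomicity_test.txt", "w")
    threads = [threading.Thread(target=writer, args=(f, m)) for m in "AB"]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    f.close()

    # Every line should consist of a single repeated character.
    for line in open("atomicity_test.txt"):
        assert len(set(line.strip())) == 1, "interleaved write detected"

Of course this only exercises concurrent write(); the crashes I am trying to fix involve close() racing with other operations, which is what the counter is for.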

Regards

Antoine.


