[Python-Dev] Trial balloon: microthreads library in stdlib

"Martin v. Löwis" martin at v.loewis.de
Tue Feb 13 16:50:35 CET 2007


Richard Tew wrote:

> If there is no Stackless 'hard switching' available (the stack switching done with the assistance of assembler) for the platform it is being compiled on, then compilation proceeds without Stackless compiled into Python. The module is not available. The process of adding support for additional platforms is rather straightforward and, with access to them, I can do it if required.
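If I read that correctly, the arrangement amounts to a compile-time dispatch of roughly the following shape. This is only a minimal sketch; the macro and header names are invented for illustration and are not taken from Stackless's actual build configuration:

    /* Minimal sketch of per-platform dispatch with a compiled-out fallback.
     * All macro and header names here are hypothetical. */
    #include <stdio.h>

    #if defined(__i386__) || defined(__x86_64__)
    #  define HAVE_HARD_SWITCHING 1   /* e.g. #include "switch_x86_gcc.h" */
    #elif defined(__sparc__)
    #  define HAVE_HARD_SWITCHING 1   /* e.g. #include "switch_sparc_gcc.h" */
    #else
    #  define HAVE_HARD_SWITCHING 0   /* unknown platform: leave the feature out */
    #endif

    int main(void)
    {
    #if HAVE_HARD_SWITCHING
        puts("hard switching available: the module can be built");
    #else
        puts("no switching code for this platform: the module is not available");
    #endif
        return 0;
    }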

I don't think portability to all systems should be necessary. Rather, there should be a clear demarcation as to what works and what does not, and that demarcation should be fully understood, so that anybody using it knows what the restrictions are.

> I don't know about SEH, but I believe support for SPARC was added in 2002: http://svn.python.org/view/stackless/trunk/Stackless/platf/switchsparcsungcc.h

My concern here is more of the kind "what about the next processor?" Is there some uniform way of dealing with it? If not, is there an explicit list of supported architectures? Is there, or could there be, an automated test telling whether the implementation will work?
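One possible shape for such a test, as a minimal sketch: a small program compiled and run on the target at build time, with exit status 0 meaning the basic control transfer behaves as expected. This only probes the setjmp/longjmp prerequisite of the soft-switching path; the assembler hard-switching code would need a platform-specific probe of its own.

    /* Hypothetical build-time smoke test: does a setjmp/longjmp round trip
     * transfer control as expected on this platform? Exit 0 on success. */
    #include <setjmp.h>

    static jmp_buf ctx;
    static volatile int reached = 0;

    static void jump_back(void)
    {
        reached = 1;
        longjmp(ctx, 1);          /* jump back to the setjmp call site */
    }

    int main(void)
    {
        if (setjmp(ctx) == 0) {   /* first return: take the direct path */
            jump_back();
            return 2;             /* never reached if longjmp works */
        }
        return reached ? 0 : 1;   /* second return: arrived via longjmp */
    }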

> Right. I imagine that if there are still problems like these, they will be brought to light when and if patches are submitted. But for now I will address what you mention, just so that people don't assume those problems still exist, especially since Stackless has been rewritten since then.

This also contributes to the misunderstanding. When is "then"? I'm talking about the "new" implementation, the one based on setjmp and longjmp that copies slices of the stack around. Has it been rewritten again since that implementation strategy was adopted?

> Perhaps my pursuit of better support for asynchronous calls has led to some confusion. Stackless is an implementation with minimal changes to the core. It does not include any modules which monkey-patch.

That's good.

> Modifying the socket and file support in the core at the C level to do what was needed was considered and quickly dismissed, primarily because it would complicate the limited changes we make to the core.

This is also something to consider: "minimize changes to the core" should not be a goal when contributing to the core (it certainly is when you maintain a fork). Feel free to make any changes you like, completely deviating from the current code base if necessary, as long as
a) the resulting code still works in all cases where the old code did (if there are changes to the semantics, they need to be discussed), and
b) the resulting code is as maintainable and readable as the old code, or better in these respects.

This misconception is what derailed the first attempt to merge setuptools: it was apparently written in a way that minimized changes to the core and preserved the semantics unmodified in all details. By doing so, it became less maintainable than if it had taken the liberty to change things.

Of course, then you have two versions to maintain: the in-core version, which integrates nicely, and the out-of-core version, which has minimal changes. It takes effort to create the version to contribute in the first place, and then the trick is keeping both of them maintainable, ideally from a single source.

Regards, Martin


