[Python-Dev] Usefulness of binary compatibility across Python versions?

Antoine Pitrou solipsis at pitrou.net
Wed Dec 20 12:30:55 EST 2017


Following this discussion, I opened two issues:

Regards

Antoine.

On Sat, 16 Dec 2017 14:22:57 +0100 Antoine Pitrou <solipsis at pitrou.net> wrote:

Hello,

Nowadays we have an official mechanism for third-party C extensions to be binary-compatible across feature releases of Python: the stable ABI. But for C extensions that do not use the stable ABI, there are also mechanisms in place to try and ensure binary compatibility. One of them is the way in which we add tp slots to the PyTypeObject structure.

Typically, when adding a tp_XXX slot, you also need to add a Py_TPFLAGS_HAVE_XXX type flag to signal those static type structures that have been compiled against a recent enough PyTypeObject definition. This way, extensions compiled against Python N-1 are supposed to "still work": since they don't have Py_TPFLAGS_HAVE_XXX set, the core Python runtime won't try to access the (non-existent) tp_XXX member.

However, besides the internal code complication, it means we need to add a new Py_TPFLAGS_HAVE_XXX flag each time we add a slot. Since we only have 32 such bits available (many of them already taken), it is a very limited resource. Is it worth it? (*)

Can an extension compiled against Python N-1 really claim to be compatible with Python N, despite other possible differences?

(*) we can't extend the tp_flags field to 64 bits, precisely because of the binary compatibility problem...

Regards

Antoine.


