[Python-Dev] Optimize Python long integers

Ondrej Certik ondrej at certik.cz
Thu Nov 13 15:53:05 CET 2008


On Tue, Nov 11, 2008 at 11:40 PM, Thomas Wouters <thomas at python.org> wrote:

On Tue, Nov 11, 2008 at 14:25, Victor Stinner <victor.stinner at haypocalc.com> wrote:

There are some very interesting propositions (with patches!) to optimize Python int and long types (especially the long integers).

Here's another one: http://code.python.org/loggerhead/users/twouters/intopt -- integer inlining through pointer-tagging trickery. In Python 2.6 it costs 2-4% overall performance but speeds up integer arithmetic (in the range [-0x40000000, 0x40000000) only) by 10-20% according to my rough measurements (I haven't tried your benchmark yet.) I haven't ported it to 3.0, but it should provide a bigger win there.

It also breaks API compatibility in a few ways: Py_TYPE(o) and Py_REFCNT(o) are no longer valid lvalues, and they, along with Py_INCREF(o) and Py_DECREF(o), may all evaluate 'o' twice. And, worst of all, it exposes the tagged pointers to third-party extensions, so anything not doing type checks with Py_TYPE(o) will likely cause bus errors.

In retrospect, perhaps this is too controversial to be added to the list ;-) I don't really expect this to be something CPython would want to use as-is, although there may be use for tagged pointers in more controlled environments (like function locals.)
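To make the pointer-tagging idea above concrete, here is a minimal Python model of one common encoding (this is a hypothetical illustration of the general technique, not the actual C-level scheme used in the intopt branch): a machine word whose low bit is 1 carries a small integer inline in its upper bits, while a word whose low bit is 0 is an ordinary object pointer.

```python
# Hypothetical sketch of tagged-pointer encoding, using the inlinable
# range quoted in the post. The real patch does this inside CPython's
# C object pointers; here plain Python ints stand in for machine words.

INLINE_MIN, INLINE_MAX = -0x40000000, 0x40000000  # range from the post

def tag(value):
    """Encode a small integer directly into a 'pointer' word."""
    if not (INLINE_MIN <= value < INLINE_MAX):
        raise ValueError("value outside the inlinable range")
    # Shift the value up one bit and set the low tag bit.
    return (value << 1) | 1

def is_tagged(word):
    """True if the word carries an inline integer (low bit set)."""
    return word & 1 == 1

def untag(word):
    """Recover the integer; the arithmetic shift preserves the sign."""
    return word >> 1
```

Because every real object pointer is at least 2-byte aligned, its low bit is always 0, which is what frees that bit for the tag -- and also why extensions that skip the Py_TYPE() check and dereference a tagged word crash.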

You might also try sympy (http://code.google.com/p/sympy/). Here it calculates 10**5 decimal digits of pi:

pure Python integers:

$ MPMATH_NOGMPY=yes ipython

In [1]: from sympy import pi

In [2]: %time a=pi.evalf(10**5)
CPU times: user 8.56 s, sys: 0.00 s, total: 8.56 s
Wall time: 8.56 s

gmpy integers:

$ ipython

In [1]: from sympy import pi

In [2]: %time a=pi.evalf(10**5)
CPU times: user 0.28 s, sys: 0.00 s, total: 0.28 s
Wall time: 0.28 s

So with gmpy, it is 30x faster.
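The same kind of comparison can be reproduced on plain bignum arithmetic without sympy or mpmath. This is a rough sketch, assuming gmpy2 (the successor of the gmpy module mentioned above) may or may not be installed; the timings and the `time_bigmul` helper are illustrative, not from the post:

```python
# Sketch: time squaring a large integer with the builtin int, and with
# gmpy2's mpz when that package happens to be available.
import time

def time_bigmul(make, n=10**5, reps=5):
    """Average time to square an (n+1)-digit number built by `make`."""
    x = make(10) ** n
    t0 = time.perf_counter()
    for _ in range(reps):
        x * x
    return (time.perf_counter() - t0) / reps

print(f"builtin int: {time_bigmul(int):.6f} s per multiply")

try:
    from gmpy2 import mpz
    print(f"gmpy2 mpz:   {time_bigmul(mpz):.6f} s per multiply")
except ImportError:
    print("gmpy2 not installed; skipping the GMP timing")
```

GMP wins mainly on very large operands, where its asymptotically faster multiplication algorithms kick in; for small integers the Python/C boundary overhead dominates either way.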

Ondrej
