[Python-Dev] performance testing recommendations in devguide

Carlos Nepomuceno carlosnepomuceno at outlook.com
Wed May 29 20:59:21 CEST 2013



Date: Wed, 29 May 2013 12:00:44 -0600 From: ericsnowcurrently at gmail.com To: python-dev at python.org Subject: [Python-Dev] performance testing recommendations in devguide

The devguide doesn't have anything on performance testing that I could find. We do have a number of relatively useful resources in this space though, like pybench and (eventually) speed.python.org. I'd like to add a page to the devguide on performance testing, including an explanation of our performance goals, how to test for them, and what tools are available.

Thanks Eric! I was looking for that kind of place! ;)

Tools I'm aware of:
* pybench (relatively limited in real-world usefulness)
* timeit module (for quick comparisons)
* benchmarks repo (real-world performance test suite)
* speed.python.org (would omit for now)
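For the "quick comparisons" case, a minimal sketch of using the timeit module to compare two equivalent snippets (the snippets themselves are just illustrative; absolute times are machine-dependent and only the relative difference matters):

```python
import timeit

# Time two ways of building a list of squares, 1000 loops each.
comp_time = timeit.timeit("[x * x for x in range(1000)]", number=1000)
map_time = timeit.timeit("list(map(lambda x: x * x, range(1000)))", number=1000)

print("list comprehension: %.4fs" % comp_time)
print("map + lambda:       %.4fs" % map_time)
```

Passing a setup= string to timeit.timeit lets you exclude one-time initialization (imports, test data) from the measured time.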

Why isn't PyBench considered reliable?[1]

What do you mean by "benchmarks repo"? http://hg.python.org/benchmarks ?

Things to test:
* speed
* memory (tools? tests?)
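On the memory side, one tool already in the standard library is sys.getsizeof, which reports the size in bytes of an object itself (not counting the objects it references). A minimal sketch:

```python
import sys

# Sizes of a few basic containers. Exact byte counts vary by
# platform, build (32- vs 64-bit), and Python version.
for obj in ([], (1, 2, 3), {}, "hello"):
    print("%-12r %d bytes" % (obj, sys.getsizeof(obj)))
```

For whole-process measurements a platform tool (e.g. /proc/self/status on Linux) or a third-party profiler would still be needed.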

Critically sensitive performance subjects:
* interpreter start-up time
* module import overhead
* attribute lookup overhead (including MRO traversal)
* function call overhead
* instance creation overhead
* dict performance (the underlying namespace type)
* tuple performance (packing/unpacking, integral container type)
* string performance

What would be important to say in the devguide regarding Python performance and testing it?
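Several of the subjects above can be micro-benchmarked directly with timeit. A hypothetical sketch (the class, function, and dict used here are made up for illustration; only relative numbers across Python versions would be meaningful):

```python
import timeit

N = 1000000  # iterations per measurement

# Function call overhead: calling a trivial function.
call = timeit.timeit("f()", setup="def f(): pass", number=N)

# Attribute lookup overhead: repeated lookup on an instance.
attr = timeit.timeit("obj.attr",
                     setup="class C(object):\n    attr = 1\nobj = C()",
                     number=N)

# Dict performance: a store followed by a load.
dct = timeit.timeit("d['k'] = 1; d['k']", setup="d = {}", number=N)

for name, t in [("function call", call),
                ("attribute lookup", attr),
                ("dict set/get", dct)]:
    print("%-16s %.4fs per %d iterations" % (name, t, N))
```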

I've just discovered that inserting at the end of a list is faster than inserting at the start. I'd like to see things like that not only in the devguide but also in the docs (http://docs.python.org/). I found it in Dan's presentation[2] but I'm not sure whether it's already in the docs somewhere.
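The difference is easy to demonstrate with timeit: list.append is amortized O(1), while list.insert(0, ...) is O(n) because every existing element has to shift over. A quick sketch:

```python
import timeit

N = 100000

# Append at the end vs. insert at the front, starting from an empty list.
append_time = timeit.timeit("lst.append(0)", setup="lst = []", number=N)
insert_time = timeit.timeit("lst.insert(0, 0)", setup="lst = []", number=N)

print("append at end:   %.4fs" % append_time)
print("insert at front: %.4fs" % insert_time)  # much slower as the list grows
```

For workloads that genuinely need fast insertion at both ends, collections.deque is the right container.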

What would you add/subtract from the above?

Threading performance!

How important is testing memory performance? How do we avoid performance regressions? Thanks!

Testing and making it faster! ;)

Of course we need a baseline (a benchmarks database) to compare against and check for improvements.

-eric

[1] "pybench - run the standard Python PyBench benchmark suite. This is considered an unreliable, unrepresentative benchmark; do not base decisions off it. It is included only for completeness." Source: http://hg.python.org/benchmarks/file/dccd52b95a71/README.txt

[2] http://stromberg.dnsalias.org/~dstromberg/Intro-to-Python/Intro%20to%20Python%202010.pdf


