[Python-Dev] Benchmark results across all major Python implementations

R. David Murray rdmurray at bitdance.com
Tue Nov 17 13:40:32 EST 2015


On Mon, 16 Nov 2015 23:37:06 +0000, "Stewart, David C" <david.c.stewart at intel.com> wrote:

> Last June we started publishing a daily performance report of the latest Python tip against the previous day's run and some established sync point. We mail these to the community to act as a "canary in the coal mine." I wrote about it at https://01.org/lp/blog/0-day-challenge-what-pulse-internet
>
> You can see our manager-style dashboard of a couple of key workloads at http://languagesperformance.intel.com/ (I have this running constantly on a dedicated screen in my office).

Just took a look at this. Pretty cool. The web page is a bit confusing, though. It doesn't give any clue as to what is being measured by the numbers presented...it isn't obvious whether those downward-sloping lines represent progress or regression. Also, the intro talks about historical data, but other than the older dates[*] in the graph there's no access to it. Do you have plans to provide access to the raw data? It also doesn't show all of the tests shown in the example email in your blog post or the emails to python-checkins...do you plan to make those graphs available in the future as well?

Also, in the emails, what is the PGO column percentage relative to?
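(For readers following along: one plausible reading, not confirmed anywhere in this thread, is that the PGO column reports the speedup of a profile-guided-optimized CPython build relative to a default build of the same revision. The formula and function name below are illustrative assumptions, sketched to make the question concrete.)

```python
# Hedged sketch: one common convention for a "PGO %" column is the speedup
# of a profile-guided-optimized build relative to a default (non-PGO) build
# of the same source revision. Whether Intel's reports use this exact
# formula is an assumption, not something stated in the thread.

def pgo_speedup_percent(default_secs: float, pgo_secs: float) -> float:
    """Percentage improvement of the PGO build over the default build."""
    return (default_secs - pgo_secs) / default_secs * 100.0

# Example: a workload taking 10.0s on a default build and 8.7s with PGO
print(f"{pgo_speedup_percent(10.0, 8.7):.1f}%")  # → 13.0%
```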

I suppose that for this to have maximum effect someone would have to specifically be paying attention to performance and figuring out why every (real) regression happened. I don't suppose we have anyone in the community currently who is taking on that role, though we certainly do have people who are interested in Python performance :)
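The day-over-day comparison described above can be sketched in a few lines. This is a minimal illustration of the "canary" idea, assuming a simple percentage-change check with a noise threshold; the threshold value and function name are hypothetical and not part of the actual 0-day infrastructure.

```python
# Hedged sketch of the "canary in the coal mine" idea: compare today's
# benchmark timing against the previous day's and flag changes larger than
# a noise threshold. The 2% threshold is an illustrative assumption.

def check_regression(yesterday_secs: float, today_secs: float,
                     noise_pct: float = 2.0) -> str:
    """Classify a day-over-day timing change, ignoring small noise."""
    delta_pct = (today_secs - yesterday_secs) / yesterday_secs * 100.0
    if delta_pct > noise_pct:
        return f"REGRESSION: {delta_pct:+.1f}%"
    if delta_pct < -noise_pct:
        return f"improvement: {delta_pct:+.1f}%"
    return f"within noise: {delta_pct:+.1f}%"

# Example: a workload that slowed from 4.00s to 4.20s (5% slower)
print(check_regression(4.00, 4.20))  # → REGRESSION: +5.0%
```

In practice someone would still have to bisect each flagged day to find the commit responsible, which is exactly the role discussed above.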

--David

[*] Personally I'd find it easier to read those dates in MM-DD form, but I suppose that's a US quirk, since in the US when using slashes the month comes first...


