[Python-Dev] Python Benchmarks
Fredrik Lundh fredrik at pythonware.com
Fri Jun 2 05:10:38 CEST 2006
M.-A. Lemburg wrote:
Seriously, I've been using and running pybench for years, and even though tweaks to the interpreter do sometimes result in speedups or slow-downs where you wouldn't expect them (due to the interpreter using the Python objects), they are reproducible, and often enough they have uncovered that optimizations in one area may well result in slow-downs in other areas.
Often enough the results are related to low-level features of the architecture you're using to run the code, such as cache size, cache lines, the number of registers in the CPU or on the FPU stack, and so on.
and that observation has never made you stop and think about whether there might be some problem with the benchmarking approach you're using? after all, if a change to e.g. the try/except code slows things down or speeds things up, is it really reasonable to expect that the time it takes to convert Unicode strings to uppercase should suddenly change due to cache effects or a changing number of registers in the CPU? real hardware doesn't work that way...
is PyBench perhaps using the following approach:
T = set of tests

for N in range(number of test runs):
    for t in T:
        t0 = get_process_time()
        t()
        t1 = get_process_time()
        assign t1 - t0 to test t

print assigned time
where t1 - t0 is very short?
that's not a very good idea, given how get_process_time tends to be implemented on current-era systems (google for "jiffies")... but it definitely explains the bogus subtest results I'm seeing, and the "magic hardware" behaviour you're seeing.
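[To make the granularity point concrete, here is a minimal sketch, not from the original post: it times a very short operation both ways. First it measures a single call via os.times(), whose user-time field on Linux typically advances in clock ticks ("jiffies", often 10 ms), so intervals far shorter than a tick read as 0.0 or, occasionally, as a whole tick. Then it measures the same operation by looping many times and dividing, timeit-style, so the measured interval is well above the timer's granularity. The work() function, loop counts, and use of time.perf_counter() are illustrative assumptions, not pybench's actual code.]

    import os
    import time

    def work():
        # stand-in for one very short benchmark subtest
        u"\xe4bc".upper()

    # per-call measurement with a jiffy-resolution process timer:
    # os.times()[0] is user CPU time; its granularity is typically one
    # clock tick, so most samples come out as exactly 0.0 and the odd
    # one jumps to a full tick -- the "timing" reflects the timer, not
    # the code being measured.
    per_call = []
    for _ in range(20):
        t0 = os.times()[0]
        work()
        t1 = os.times()[0]
        per_call.append(t1 - t0)
    print("per-call samples:", per_call)

    # loop-and-divide measurement (the timeit approach): run enough
    # iterations that the elapsed time dwarfs the timer granularity,
    # and take the best of several repeats to suppress scheduling noise.
    def measure(func, loops=100000, repeats=5):
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            for _ in range(loops):
                func()
            t1 = time.perf_counter()
            best = min(best, (t1 - t0) / loops)
        return best

    print("per-call time: %.1f ns" % (measure(work) * 1e9))

[The second measurement is stable from run to run precisely because each timed interval spans many iterations; the first is dominated by clock quantization.]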