[Python-Dev] Benchmarking Python 3.3 against Python 2.7 (wide build) (original) (raw)
Steven D'Aprano steve at pearwood.info
Mon Oct 1 03:35:58 CEST 2012
On Sun, Sep 30, 2012 at 07:12:47PM -0400, Brett Cannon wrote:
> python3 perf.py -T --basedir ../benchmarks -f -b py3k ../cpython/builds/2.7-wide/bin/python ../cpython/builds/3.3/bin/python3.3
> ### call_method ###
> Min: 0.491433 -> 0.414841: 1.18x faster
> Avg: 0.493640 -> 0.416564: 1.19x faster
> Significant (t=127.21)
> Stddev: 0.00170 -> 0.00162: 1.0513x smaller
I'm not sure if this is the right place to discuss this, but what is the justification for recording the average and standard deviation of the benchmark runs?
If the benchmarks are based on timeit, the timeit docs warn against taking any statistic other than the minimum.
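For reference, a minimal sketch of the approach the timeit docs recommend, using a placeholder statement (the statement and the repeat/number counts here are just assumptions for illustration):

    import timeit

    # Repeat the measurement several times and keep only the fastest run;
    # the slower runs mostly reflect interference from the OS and other
    # processes, not variance in the code being timed.
    timings = timeit.repeat("sorted(range(1000))", repeat=5, number=10000)

    print("min: %.6f s" % min(timings))                   # the statistic the docs endorse
    print("avg: %.6f s" % (sum(timings) / len(timings)))  # inflated by system jitter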
-- Steven