[Python-Dev] Micro-benchmarks for function calls (PEP 576/579/580)

Victor Stinner vstinner at redhat.com
Tue Jul 10 18:50:58 EDT 2018


The pyperformance benchmark suite used to include micro-benchmarks on function calls, but I removed them because they were sending the wrong signal. A function call in isolation tells you nothing useful when comparing two versions of CPython, or CPython to PyPy. It's also very hard to measure the cost of a function call under a JIT compiler which is able to inline the code into the caller... So I moved all these stupid "micro benchmarks" to a dedicated Git repository: https://github.com/vstinner/pymicrobench

Sometimes I add a new micro-benchmark when I work on one specific micro-optimization.

But more generally, I suggest that you not run micro benchmarks and that you avoid micro optimizations :-)

Victor

2018-07-10 0:20 GMT+02:00 Jeroen Demeyer <J.Demeyer at ugent.be>:

Here is an initial version of a micro-benchmark for C function calling:

https://github.com/jdemeyer/callbench

I don't have results yet, since I'm struggling to find the right options for "perf timeit" to get a stable result. If somebody knows how to do this, help is welcome.
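[Editorially, the core idea behind stabilizing such timings is to take many samples and keep only the best one, so that noise from other processes inflates some samples but not the minimum. A rough stdlib sketch of that idea (using `timeit.repeat`, not the perf module itself; the statement and counts are illustrative, not from callbench):

```python
import timeit

# Time a C function call (len) many times per sample, repeat the whole
# measurement several times, and take the minimum: the fastest sample is
# the one least disturbed by other processes. Illustrative numbers only.
NUMBER = 200_000  # calls per sample
timings = timeit.repeat(stmt='f("abc")', setup='f = len',
                        repeat=5, number=NUMBER)
best = min(timings)
print(f"best of {len(timings)} samples: {best / NUMBER * 1e9:.1f} ns per call")
```

The perf/pyperf tool automates this sampling, adds warmup runs, and can pin the process to isolated CPUs, which is why its results are usually more stable than a single hand-rolled timing loop.]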

Jeroen.


Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev
