Usage — Python Performance Benchmark Suite 1.0.6 documentation

Installation

Command to install pyperformance:

python3 -m pip install pyperformance

This command installs the pyperformance program.

If needed, the pyperf and six dependencies are installed automatically.

pyperformance officially supports Python 3.6 and newer, but it may also work on Python 3.4 and 3.5.

At runtime, Python development files (header files) may be needed to install some dependencies such as dulwich_log or psutil, in order to build their C extensions. On Fedora, the development files can be installed as shown below.
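For example, on a standard Fedora setup the development headers are typically provided by the python3-devel package (package name assumed here):

sudo dnf install python3-devel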

Windows notes

On Windows, if you want to use a python.exe built from source, you should not run that python.exe directly, because pyperformance needs to build dependencies such as greenlet, dulwich or psutil from source. Instead, you must run the little-known command PC\layout to create a filesystem layout that resembles an installed Python:

.\python.bat -m PC.layout --preset-default --copy installed -v

(Use the --help flag for more info about PC\layout.)

Now you can use the “installed” Python executable:

installed\python.exe -m pip install pyperformance
installed\python.exe -m pyperformance run ...

Using an actually installed Python executable (e.g. via py) works fine too.
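For instance, with the py launcher (the version selector below is only illustrative):

py -3.11 -m pip install pyperformance
py -3.11 -m pyperformance run -o results.json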

Run benchmarks

Commands to compare Python 3.6 and Python 3.7 performance:

pyperformance run --python=python3.6 -o py36.json
pyperformance run --python=python3.7 -o py37.json
pyperformance compare py36.json py37.json

Note: the python3 -m pyperformance ... syntax works as well (e.g. python3 -m pyperformance run -o py37.json), but requires installing pyperformance on each tested Python version.
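As a sketch of what that looks like (the interpreter names are assumed to be on PATH):

python3.6 -m pip install pyperformance
python3.6 -m pyperformance run -o py36.json
python3.7 -m pip install pyperformance
python3.7 -m pyperformance run -o py37.json
python3.7 -m pyperformance compare py36.json py37.json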

JSON files are produced by the pyperf module and so can be analyzed using pyperf commands:

python3 -m pyperf show py36.json
python3 -m pyperf check py36.json
python3 -m pyperf metadata py36.json
python3 -m pyperf stats py36.json
python3 -m pyperf hist py36.json
python3 -m pyperf dump py36.json
(...)

It’s also possible to use pyperf to compare results of two JSON files:

python3 -m pyperf compare_to py36.json py37.json --table

Basic commands

pyperformance actions:

run          Run benchmarks on the running Python
show         Display a benchmark file
compare      Compare two benchmark files
list         List benchmarks of the running Python
list_groups  List benchmark groups of the running Python
venv         Actions on the virtual environment

Common options

Options available to all commands:

-h, --help show this help message and exit

run

Run benchmarks on the running Python.

Usage:

pyperformance run [-h] [-r] [-f] [--debug-single-value] [-v] [-m] [--affinity CPU_LIST] [-o FILENAME] [--append FILENAME] [--manifest MANIFEST] [--timeout TIMEOUT] [-b BM_LIST] [--inherit-environ VAR_LIST] [-p PYTHON] [--hook HOOK]

options:

-h, --help            show this help message and exit
-r, --rigorous        Spend longer running tests to get more accurate results
-f, --fast            Get rough answers quickly
--debug-single-value  Debug: fastest mode, only compute a single value
-v, --verbose         Print more output
-m, --track-memory    Track memory usage. This only works on Linux.
--affinity CPU_LIST   Specify CPU affinity for benchmark runs. This way,
                      benchmarks can be forced to run on a given CPU to
                      minimize run to run variation.
-o FILENAME, --output FILENAME
                      Run the benchmarks on only one interpreter and write
                      benchmark into FILENAME. Provide only baseline_python,
                      not changed_python.
--append FILENAME     Add runs to an existing file, or create it if it
                      doesn't exist
--timeout TIMEOUT     Specify a timeout in seconds for a single benchmark run
                      (default: disabled)
--manifest MANIFEST   benchmark manifest file to use
-b BM_LIST, --benchmarks BM_LIST
                      Comma-separated list of benchmarks to run. Can contain
                      both positive and negative arguments:
                      --benchmarks=run_this,also_this,-not_this. If there are
                      no positive arguments, we'll run all benchmarks except
                      the negative arguments. Otherwise we run only the
                      positive arguments.
--inherit-environ VAR_LIST
                      Comma-separated list of environment variable names that
                      are inherited from the parent environment when running
                      benchmarking subprocesses.
-p PYTHON, --python PYTHON
                      Python executable (default: use running Python)
--same-loops SAME_LOOPS
                      Use the same number of loops as a previous run (i.e.,
                      don't recalibrate). Should be a path to a .json file
                      from a previous run.
--hook HOOK           Apply the given pyperf hook when running the benchmarks.
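Putting a few of these options together, a typical invocation might look like the following (the benchmark selection and CPU list are illustrative):

pyperformance run --rigorous --affinity 2,3 -b default,-2to3 -o results.json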

show

Display a benchmark file.

Usage:

positional arguments:

compare

Compare two benchmark files.

Usage:

pyperformance compare [-h] [-v] [-O STYLE] [--csv CSV_FILE] [--inherit-environ VAR_LIST] [-p PYTHON] baseline_file.json changed_file.json

positional arguments:

baseline_file.json
changed_file.json

options:

-v, --verbose         Print more output
-O STYLE, --output_style STYLE
                      What style the benchmark output should take. Valid
                      options are 'normal' and 'table'. Default is normal.
--csv CSV_FILE        Name of a file the results will be written to, as a
                      three-column CSV file containing minimum runtimes for
                      each benchmark.
--inherit-environ VAR_LIST
                      Comma-separated list of environment variable names that
                      are inherited from the parent environment when running
                      benchmarking subprocesses.
-p PYTHON, --python PYTHON
                      Python executable (default: use running Python)
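For example, to print the comparison as a table and also save per-benchmark minimum runtimes to CSV (the file names are illustrative):

pyperformance compare py36.json py37.json -O table --csv comparison.csv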

list

List benchmarks of the running Python.

Usage:

pyperformance list [-h] [--manifest MANIFEST] [-b BM_LIST] [--inherit-environ VAR_LIST] [-p PYTHON]

options:

--manifest MANIFEST   benchmark manifest file to use
-b BM_LIST, --benchmarks BM_LIST
                      Comma-separated list of benchmarks to run. Can contain
                      both positive and negative arguments:
                      --benchmarks=run_this,also_this,-not_this. If there are
                      no positive arguments, we'll run all benchmarks except
                      the negative arguments. Otherwise we run only the
                      positive arguments.
--inherit-environ VAR_LIST
                      Comma-separated list of environment variable names that
                      are inherited from the parent environment when running
                      benchmarking subprocesses.
-p PYTHON, --python PYTHON
                      Python executable (default: use running Python)

Use python3 -m pyperformance list -b all to list all benchmarks.

list_groups

List benchmark groups of the running Python.

Usage:

pyperformance list_groups [-h] [--manifest MANIFEST] [--inherit-environ VAR_LIST] [-p PYTHON]

options:

--manifest MANIFEST   benchmark manifest file to use
--inherit-environ VAR_LIST
                      Comma-separated list of environment variable names that
                      are inherited from the parent environment when running
                      benchmarking subprocesses.
-p PYTHON, --python PYTHON
                      Python executable (default: use running Python)

venv

Actions on the virtual environment.

Actions:

show      Display the path to the virtual environment and its status
          (created or not)
create    Create the virtual environment
recreate  Force the recreation of the virtual environment
remove    Remove the virtual environment

Common options:

--venv VENV           Path to the virtual environment
--inherit-environ VAR_LIST
                      Comma-separated list of environment variable names that
                      are inherited from the parent environment when running
                      benchmarking subprocesses.
-p PYTHON, --python PYTHON
                      Python executable (default: use running Python)
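A possible workflow using an explicit virtual environment location (the path is illustrative):

pyperformance venv create --venv ~/pyperf-venv -b default
pyperformance venv show --venv ~/pyperf-venv
pyperformance venv remove --venv ~/pyperf-venv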

venv show

Display the path to the virtual environment and its status (created or not).

Usage:

pyperformance venv show [-h] [--venv VENV] [--inherit-environ VAR_LIST] [-p PYTHON]

venv create

Create the virtual environment.

Usage:

pyperformance venv create [-h] [--venv VENV] [--manifest MANIFEST] [-b BM_LIST] [--inherit-environ VAR_LIST] [-p PYTHON]

options:

--manifest MANIFEST   benchmark manifest file to use
-b BM_LIST, --benchmarks BM_LIST
                      Comma-separated list of benchmarks to run. Can contain
                      both positive and negative arguments:
                      --benchmarks=run_this,also_this,-not_this. If there are
                      no positive arguments, we'll run all benchmarks except
                      the negative arguments. Otherwise we run only the
                      positive arguments.

venv recreate

Force the recreation of the virtual environment.

Usage:

pyperformance venv recreate [-h] [--venv VENV] [--manifest MANIFEST] [-b BM_LIST] [--inherit-environ VAR_LIST] [-p PYTHON]

options:

--manifest MANIFEST   benchmark manifest file to use
-b BM_LIST, --benchmarks BM_LIST
                      Comma-separated list of benchmarks to run. Can contain
                      both positive and negative arguments:
                      --benchmarks=run_this,also_this,-not_this. If there are
                      no positive arguments, we'll run all benchmarks except
                      the negative arguments. Otherwise we run only the
                      positive arguments.

venv remove

Remove the virtual environment.

Usage:

pyperformance venv remove [-h] [--venv VENV] [--inherit-environ VAR_LIST] [-p PYTHON]

Compile Python to run benchmarks

pyperformance actions:

compile      Compile and install CPython and run benchmarks on the installed Python
compile_all  Compile and install CPython and run benchmarks on the installed
             Python on all branches and revisions of CONFIG_FILE
upload       Upload JSON results to a Codespeed website

All these commands require a configuration file.

Simple configuration usable for compile (but not for compile_all nor upload), doc/benchmark.conf:

[config]
json_dir = ~/prog/python/bench_json

[scm]
repo_dir = ~/prog/python/master
update = True

[compile]
bench_dir = ~/prog/python/bench_tmpdir

[run_benchmark]
system_tune = True
affinity = 2,3

Configuration file sample with comments, doc/benchmark.conf.sample:

[config]
# Directory where JSON files are written.
# - uploaded files are moved to json_dir/uploaded/
# - results of patched Python are written into json_dir/patch/
json_dir = ~/json

# If True, compile CPython in debug mode (LTO and PGO disabled),
# run benchmarks with --debug-single-sample, and disable upload.
# Use this option to quickly test a configuration.
debug = False

[scm]
# Directory of CPython source code (Git repository)
repo_dir = ~/cpython

# Update the Git repository (git fetch)?
update = True

# Name of the Git remote, used to create the revision of
# the Git branch. For example, use revision 'remotes/origin/3.6'
# for the branch '3.6'.
git_remote = remotes/origin

[compile]
# Create files into bench_dir:
# - bench_dir/bench-xxx.log
# - bench_dir/prefix/: where Python is installed
# - bench_dir/venv/: virtual environment used by pyperformance
bench_dir = ~/bench_tmpdir

# Link Time Optimization (LTO)?
lto = True

# Profile Guided Optimization (PGO)?
pgo = True

# Build the experimental just-in-time (JIT) compiler?
# Possible values are:
# - no: (default) do not build the JIT or the micro-op interpreter.
#   The new PYTHON_JIT environment variable has no effect.
# - yes: build the JIT and enable it by default. PYTHON_JIT=0 can be used to
#   disable it at runtime.
# - yes-off: build the JIT, but do not enable it by default. PYTHON_JIT=1 can
#   be used to enable it at runtime.
# - interpreter: do not build the JIT, but do build and enable the micro-op
#   interpreter. This is useful for those of us who find ourselves developing
#   or debugging micro-ops (but don't want to deal with the JIT).
#   PYTHON_JIT=0 can be used to disable the micro-op interpreter at runtime.
jit = no

# The space-separated list of libraries that are package-only,
# i.e., locally installed but not on header and library paths.
# For each such library, determine the install path and add an
# appropriate subpath to CFLAGS and LDFLAGS declarations passed
# to configure. As an exception, the prefix for openssl, if that
# library is present here, is passed via the --with-openssl
# option. Currently, this only works with Homebrew on macOS.
# If running on macOS with Homebrew, you probably want to use:
#     pkg_only = openssl readline sqlite3 xz zlib
# The version of zlib shipping with macOS probably works as well,
# as long as Apple's SDK headers are installed.
pkg_only =

# Install Python? If false, run Python from the build directory.
# WARNING: Running Python from the build directory introduces subtle changes
# compared to running an installed Python. Moreover, creating a virtual
# environment using a Python run from the build directory fails in many cases,
# especially on Python older than 3.4. Only disable installation if you
# really understand what you are doing!
install = True

# Specify the '-j' parameter of the 'make' command
jobs = 8

[run_benchmark]
# Run "sudo python3 -m pyperf system tune" before running benchmarks?
system_tune = True

# --manifest option for 'pyperformance run'
manifest =

# --benchmarks option for 'pyperformance run'
benchmarks =

# --affinity option for 'pyperf system tune' and 'pyperformance run'
affinity =

# Upload generated JSON file?
# Upload is disabled on patched Python, in debug mode, or if install is
# disabled.
upload = False

# Configuration to upload results to a Codespeed website
[upload]
url =
environment =
executable =
project =

[compile_all]
# List of CPython Git branches
branches = default 3.6 3.5 2.7

# List of revisions to benchmark by compile_all
[compile_all_revisions]
# list of 'sha1=' (default branch: 'master') or 'sha1=branch'
# used by the "pyperformance compile_all" command
# e.g.:
# 11159d2c9d6616497ef4cc62953a5c3cc8454afb =

compile

Compile Python, install Python and run benchmarks on the installed Python.

Usage:

pyperformance compile [-h] [--patch PATCH] [-U] [-T] [--inherit-environ VAR_LIST] [-p PYTHON] config_file revision [branch]

positional arguments:

config_file  Configuration filename
revision     Python benchmarked revision
branch       Git branch

options:

--patch PATCH         Patch file
-U, --no-update       Don't update the Git repository
-T, --no-tune         Don't run 'pyperf system tune' to tune the system for
                      benchmarks
--inherit-environ VAR_LIST
                      Comma-separated list of environment variable names that
                      are inherited from the parent environment when running
                      benchmarking subprocesses.
-p PYTHON, --python PYTHON
                      Python executable (default: use running Python)
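For example, using the simple configuration above to benchmark one revision of the 3.6 branch (the revision is the one shown in the sample configuration and is purely illustrative):

pyperformance compile doc/benchmark.conf 11159d2c9d6616497ef4cc62953a5c3cc8454afb 3.6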

Notes:

compile_all

Compile all branches and revisions of CONFIG_FILE.

Usage:

pyperformance compile_all [-h] [--inherit-environ VAR_LIST] [-p PYTHON] config_file

positional arguments:

config_file Configuration filename

options:

--inherit-environ VAR_LIST
                      Comma-separated list of environment variable names that
                      are inherited from the parent environment when running
                      benchmarking subprocesses.
-p PYTHON, --python PYTHON
                      Python executable (default: use running Python)
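For example, assuming a configuration file that also fills in the [compile_all] and [compile_all_revisions] sections (the path is illustrative):

pyperformance compile_all doc/benchmark.conf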

upload

Upload results from a JSON file to a Codespeed website.

Usage:

pyperformance upload [-h] [--inherit-environ VAR_LIST] [-p PYTHON] config_file json_file

positional arguments:

config_file  Configuration filename
json_file    JSON filename

options:

--inherit-environ VAR_LIST
                      Comma-separated list of environment variable names that
                      are inherited from the parent environment when running
                      benchmarking subprocesses.
-p PYTHON, --python PYTHON
                      Python executable (default: use running Python)
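For example (the file paths are illustrative, and the configuration file must fill in the [upload] section):

pyperformance upload doc/benchmark.conf ~/json/results.json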

How to get stable benchmarks

pyperformance virtual environment

To run benchmarks, pyperformance first creates a virtual environment. It installs requirements with pinned versions to get a reproducible environment. The system Python may have unknown modules installed with unknown versions, and can have .pth files run at Python startup which can modify Python behaviour or at least slow down Python startup.
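If you want to check where this environment lives before running anything, the venv show subcommand described above can be used:

pyperformance venv show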

What is the goal of pyperformance

A benchmark is always written for a specific purpose. Depending on how the benchmark is written and how it is run, the result can differ and so carry a different meaning.

The pyperformance benchmark suite has multiple goals:

Don’t disable GC nor ASLR

The pyperf module and pyperformance benchmarks are designed to produce reproducible results, but not at the price of running benchmarks in a special mode which would not be used to run applications in production. For these reasons, the Python garbage collector, the Python randomized hash function and system ASLR (Address Space Layout Randomization) are not disabled. Benchmarks don't call gc.collect() either, since CPython implements it with a stop-the-world pause, and applications therefore avoid calling it so as not to kill performance.

Include outliers and spikes

Moreover, while the pyperf documentation explains how to reduce the random noise of the system and other applications, some benchmarks use the system and so can get different timings depending on the system workload, I/O performance, etc. Outliers and temporary spikes in results are not automatically removed: values are summarized by computing the average (arithmetic mean) and standard deviation, which "contain" these spikes, instead of using, for example, the median and the median absolute deviation, which would ignore outliers. This is a deliberate choice, since applications running in production are impacted by such temporary slowdowns caused by various things like a garbage collection or a JIT compilation.

Warmups and steady state

A borderline issue is benchmark "warmups". The first values of each worker process are always slower: 10% slower in the best case, but it can be 1000% slower or more on PyPy. Right now (2017-04-14), pyperformance ignores the first values, considered to be warmup, until a benchmark reaches its "steady state". The "steady state" can include temporary spikes every 5 values (e.g. caused by the garbage collector), and it can still imply further JIT compiler optimizations, but with a "low" impact on the average.

To be clear, "warmup" and "steady state" detection is a work in progress and a very complex topic, especially on PyPy and its JIT compiler.

Notes

pyperformance is a tool for comparing the performance of two Python implementations.

pyperformance will run Student’s two-tailed T test on the benchmark results at the 95% confidence level to indicate whether the observed difference is statistically significant.

Omitting the -b option will result in the default group of benchmarks being run; omitting -b is the same as specifying -b default.

To run every benchmark pyperformance knows about, use -b all. To see a full list of all available benchmarks, use --help.

Negative benchmark specifications are also supported: -b -2to3 will run every benchmark in the default group except for 2to3 (this is the same as -b default,-2to3). -b all,-django will run all benchmarks except the Django templates benchmark. Negative groups (e.g., -b -default) are not supported. Positive benchmarks are parsed before the negative benchmarks are subtracted.

If --track-memory is passed, pyperformance will continuously sample the benchmark's memory usage. This currently only works on Linux 2.6.16 and higher or Windows with PyWin32. Because --track-memory introduces performance jitter while collecting memory measurements, only memory usage is reported in the final report.
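A hedged example of such a memory-tracking run on Linux (the output file name is illustrative):

pyperformance run --track-memory -b default -o mem_results.json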