pytest-benchmark: pytest fixture for benchmarking code

Overview


A pytest fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer.

See calibration and FAQ.

Installation

pip install pytest-benchmark

Documentation

For latest release: pytest-benchmark.readthedocs.org/en/stable.

For master branch (may include documentation fixes): pytest-benchmark.readthedocs.io/en/latest.

Examples

But first, a prologue:

This plugin tightly integrates into pytest. To use it effectively you should know a thing or two about pytest first. Take a look at the introductory material or watch talks.

A few notes:

This plugin provides a benchmark fixture. This fixture is a callable object that will benchmark any function passed to it.

Example:

    def something(duration=0.000001):
        """
        Function that needs some serious benchmarking.
        """
        time.sleep(duration)
        # You may return anything you want, like the result of a computation
        return 123

    def test_my_stuff(benchmark):
        # benchmark something
        result = benchmark(something)

        # Extra code, to verify that the run completed correctly.
        # Sometimes you may want to check the result, fast functions
        # are no good if they return incorrect results :-)
        assert result == 123

You can also pass extra arguments:

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.02)

Or even keyword arguments:

    def test_my_stuff(benchmark):
        benchmark(time.sleep, duration=0.02)

Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be convenient:

    def test_my_stuff(benchmark):
        @benchmark
        def something():  # unnecessary function call
            time.sleep(0.000001)

A better way is to just benchmark the final function:

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.000001)  # way more accurate results!

If you need fine control over how the benchmark is run (like a setup function, or exact control of iterations and rounds), there's a special mode - pedantic:

    def my_special_setup():
        ...

    def test_with_setup(benchmark):
        benchmark.pedantic(something, setup=my_special_setup,
                           args=(1, 2, 3), kwargs={'foo': 'bar'},
                           iterations=10, rounds=100)

Screenshots

Normal run:

Screenshot of pytest summary

Compare mode (--benchmark-compare):

Screenshot of pytest summary in compare mode
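Compare mode works against previously saved runs. A sketch of one possible workflow (the flag names are pytest-benchmark options; the tests/ path is illustrative):

```shell
# Save a baseline run (stored under .benchmarks/ by default)
pytest tests/ --benchmark-autosave

# ...make changes, then compare against a saved run
pytest tests/ --benchmark-compare

# Optionally fail if things got slower (e.g. >5% on the min timing)
pytest tests/ --benchmark-compare --benchmark-compare-fail=min:5%
```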

Histogram (--benchmark-histogram):

Histogram sample

Also, it has nice tooltips.
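Histogram generation relies on pygal. One way to get it is via the optional extra (a sketch; the [histogram] extra is declared by pytest-benchmark):

```shell
# Install the optional histogram dependencies (pygal, pygaljs)
pip install pytest-benchmark[histogram]

# Produce an SVG histogram per test group
pytest tests/ --benchmark-histogram
```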

Development

To run all the tests run:

tox

Credits