
ReBench: Execute and Document Benchmarks Reproducibly


ReBench is a tool to run and document benchmark experiments. Currently, it is often used for benchmarking language implementations, but it can be used to monitor the performance of all kinds of other applications and programs, too.

The ReBench configuration format is a text format based on YAML. A configuration file defines how to build and execute a set of experiments, i.e., benchmarks. It describes which executables are used, which parameters are given to the benchmarks, and the number of iterations to be used to obtain statistically reliable results.
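For example, iteration counts and other run settings can be attached to a suite in the configuration. The following is a minimal sketch; names such as ExampleSuite, Harness, and Bench1 are placeholders:

```yaml
benchmark_suites:
    ExampleSuite:
        gauge_adapter: RebenchLog
        command: Harness %(benchmark)s
        invocations: 5    # how often the executable is started
        iterations: 10    # how many measurements are taken per invocation
        benchmarks:
            - Bench1
```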

With this approach, the configuration contains all benchmark-specific information to reproduce a benchmark run. However, it does not capture the whole system.

The data of all benchmark runs is recorded in a data file for later analysis. Important for long-running experiments, benchmarks can be aborted and continued at a later time.

ReBench focuses on the execution aspect and does not itself provide advanced analysis facilities. Instead, the recorded results should be processed by dedicated tools, such as scripts for statistical analysis in R or Python, or ReBenchDB for continuous performance tracking.
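For instance, since the data file is tab-separated text with comment lines starting with #, it can be loaded for ad-hoc analysis roughly like this. This is a sketch: the exact column names (here assumed to be benchmark and value) depend on the ReBench version, so check the file's header first.

```python
import pandas as pd

# ReBench data files are tab-separated; lines starting with '#' carry metadata
data = pd.read_csv('example.data', sep='\t', comment='#')

# e.g., summarize the measured values per benchmark
print(data.groupby('benchmark')['value'].describe())
```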

The documentation for ReBench is hosted at https://rebench.readthedocs.io/.

Goals and Features

ReBench is designed to

- enable reproduction of experiments;
- document all benchmark parameters;
- provide a flexible execution model, with support for interrupting and continuing benchmarking;
- enable the definition of complex sets of comparisons and their flexible execution;
- report results to continuous performance monitoring systems, e.g., Codespeed or ReBenchDB;
- provide basic support for building/compiling benchmarks/experiments on demand;
- be extensible to parse the output of custom benchmark harnesses.

ReBench Denoise

Denoise configures a Linux system for benchmarking. It adapts parameters of the CPU frequency management and task scheduling to reduce some of the variability that can cause widely different benchmark results for the same experiment.
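For illustration, these are the kinds of Linux interfaces involved; this is not literally what rebench-denoise runs, and the paths assume an Intel CPU with the intel_pstate driver:

```bash
# pin the CPU frequency governor to 'performance' for all cores
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# disable turbo boost so clock speeds stay predictable (intel_pstate only)
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
```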

Denoise is inspired by Krun, which has many more features to carefully minimize possible interference. Krun is the tool of choice if the most reliable results are required. ReBench only adapts a subset of the parameters, while staying self-contained and minimizing external dependencies.

Non-Goals

ReBench isn't

- a benchmarking harness. Instead, it relies on existing harnesses and can be extended to parse their output.
- a performance analysis tool. It is meant to execute experiments and record the corresponding measurements.
- a data analysis tool. It provides only a bare minimum of statistics, but has an extensible architecture.

Installation

ReBench is implemented in Python and can be installed via pip:
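```bash
pip install rebench
```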

To reduce noise generated by the system, rebench-denoise adjusts kernel and CPU settings, which requires superuser rights; it is therefore typically invoked via sudo.

rebench-denoise is currently tested on Ubuntu and Rocky Linux. It is designed to degrade gracefully and report the expected implications when it cannot adapt system settings. See the docs for details.

Usage

A minimal configuration file looks like this:

```yaml
# this run definition will be chosen if no parameters are given to rebench
default_experiment: all
default_data_file: 'example.data'

# a set of suites with different benchmarks and possibly different settings
benchmark_suites:
    ExampleSuite:
        gauge_adapter: RebenchLog
        command: Harness %(benchmark)s %(input)s %(variable)s
        input_sizes: [2, 10]
        variable_values:
            - val1
        benchmarks:
            - Bench1
            - Bench2

# a set of executables for the benchmark execution
executors:
    MyBin1:
        path: bin
        executable: test-vm1.py %(cores)s
        cores: [1]
    MyBin2:
        path: bin
        executable: test-vm2.py

# combining benchmark suites and executions
experiments:
    Example:
        suites:
            - ExampleSuite
        executions:
            - MyBin1
            - MyBin2
```

Saved as test.conf, this configuration could be executed with ReBench as follows:
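```bash
rebench test.conf
```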

See the documentation for details: https://rebench.readthedocs.io/.

Support and Contributions

In case you encounter issues, please feel free to open an issue so that we can help.

For contributions, we use pull requests. For larger contributions, it is likely useful to discuss them in an issue first.

Development Setup

For the development setup, the currently recommended way is to use pip install --editable . in the root directory of the repository. You may also want to use a virtual environment to avoid conflicts with other Python packages.

For instance:

```bash
git clone https://github.com/smarr/rebench.git
cd rebench
pip install --editable .
```

Unit tests and linting can be run with:

```bash
python -m pytest
python -m pylint rebench
```

Use in Academia

If you use ReBench for research and in academic publications, please consider citing it.

The preferred citation is:

```bibtex
@misc{ReBench:2025,
  author = {Marr, Stefan},
  doi = {10.5281/zenodo.1311762},
  month = {February},
  note = {Version 1.3},
  publisher = {GitHub},
  title = {ReBench: Execute and Document Benchmarks Reproducibly},
  year = 2025
}
```

Some publications that have used ReBench include: