πŸ“– Quick Start


What is Bencher?

Bencher is a suite of continuous benchmarking tools. Have you ever had a performance regression impact your users? Bencher could have prevented that from happening. Bencher allows you to detect and prevent performance regressions before they make it to production.

For the same reasons that unit tests are run in CI to prevent feature regressions, benchmarks should be run in CI with Bencher to prevent performance regressions. Performance bugs are bugs!

Install bencher CLI

Select your operating system and run the provided command to install the bencher CLI. For more details, see the bencher CLI install documentation.

Linux / macOS:

curl --proto '=https' --tlsv1.2 -sSfL https://bencher.dev/download/install-cli.sh | sh

Windows:

powershell -c "irm https://bencher.dev/download/install-cli.ps1 | iex"

Other (via cargo):

cargo install --git https://github.com/bencherdev/bencher --branch main --locked --force bencher_cli

Now, let's check that you have the bencher CLI installed. Run:

bencher --version

You should see the version of the bencher CLI that you just installed.

Select your Benchmark Harness

If you already have benchmarks written, select your programming language and benchmarking harness from the list below. Otherwise, just skip this step. For more details, see the benchmark harness adapters documentation.

C#

C++

Go

Java

JavaScript

Python

Ruby

Rust

Shell

JSON
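If the automatic harness detection does not pick the right adapter for your benchmark output, you can tell bencher run which adapter to use explicitly. The line below is only a minimal sketch, assuming your benchmarks use Rust's Criterion and that the corresponding adapter name is rust_criterion; check the benchmark harness adapters documentation for the exact adapter names supported by your CLI version:

bencher run --adapter rust_criterion "cargo bench"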

Track your Benchmarks

You are now ready to track your benchmark results! To do so, you will use the bencher run CLI subcommand to run your benchmarks and collect the results. Run the command for your language and harness:

C#:

bencher run "dotnet run -c Release"

C++:

bencher run "make benchmarks"
bencher run "make benchmarks --benchmark_format=json"

Go:

bencher run "go test -bench"

Java:

bencher run --file results.json "java -jar benchmarks.jar -rf json -rff results.json"

JavaScript:

bencher run "node benchmark.js"

Python:

bencher run "asv run"
bencher run --file results.json "pytest --benchmark-json results.json benchmarks.py"

Ruby:

bencher run "ruby benchmarks.rb"

Rust:

bencher run "cargo +nightly bench"
bencher run "cargo bench"

Shell:

bencher run --file results.json "hyperfine --export-json results.json 'sleep 0.1'"

JSON:

bencher run "bencher mock"

You may need to modify the benchmark command to match your setup. If you don’t have any benchmarks yet, you can just use the bencher mock subcommand as your benchmark command to generate some mock data. If everything works as expected, the end of the output should look something like this:
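You can also name the project, branch, and testbed for a run explicitly instead of relying on the defaults Bencher picks for you. The sketch below is illustrative only: the project slug, branch, and testbed values are placeholders, and it uses bencher mock with the json adapter so it can run without any real benchmarks:

# Placeholder values: replace my-project, main, and localhost with your own.
bencher run \
  --project my-project \
  --branch main \
  --testbed localhost \
  --adapter json \
  "bencher mock"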


View results:

- bencher::mock_0 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=f7022024-ae16-4782-8f0d-869d65a82930&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54

- bencher::mock_1 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=7a823440-216f-482d-a05f-8bf75e865bba&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54

- bencher::mock_2 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=8d9695ff-f352-4781-9561-3c69012fd9fe&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54

- bencher::mock_3 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=8ef6e256-8084-4afe-a7cf-eaa46384c19d&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54

- bencher::mock_4 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=1205e35a-c73b-4ff9-916c-40838a62ae0b&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54

Claim this project: https://bencher.dev/auth/signup?claim=d4b0cd5a-8422-40af-9872-8e18d5d062c4

You can now view the results for each of your benchmarks in the browser. Click or copy and paste the links from View results. To claim these results, click or copy and paste the Claim this project link into your browser.

🐰 Congrats! You tracked your first benchmark results! πŸŽ‰

Keep Going: How to Claim Benchmark Results ➑


Published: Sat, August 12, 2023 at 9:07:00 PM UTC | Last Updated: Sun, April 6, 2025 at 6:25:00 PM UTC