Reference · BenchmarkTools.jl

References

BenchmarkTools.clear_empty! — Method

clear_empty!(group::BenchmarkGroup)

Recursively remove any empty subgroups from group.

Use this to prune a BenchmarkGroup after accidentally accessing a nonexistent field, e.g. g = BenchmarkGroup(); g[1] without storing anything to g[1], which creates an empty subgroup g[1].
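
A minimal sketch of the pruning described above:

g = BenchmarkGroup()
g[1]                            # accidental access creates an empty subgroup g[1]
BenchmarkTools.clear_empty!(g)  # removes the empty subgroup again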

source

BenchmarkTools.judge — Method

judge(target::BenchmarkGroup, baseline::BenchmarkGroup; [time_tolerance::Float64=0.05])

source

BenchmarkTools.judge — Method

judge(target::TrialEstimate, baseline::TrialEstimate; [time_tolerance::Float64=0.05])

Report on whether the first estimate target represents a regression or an improvement with respect to the second estimate baseline.
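
A minimal sketch, assuming the two estimates come from trials collected with @benchmarkable and run:

old = median(run(@benchmarkable sum($(rand(1000)))))   # baseline estimate
new = median(run(@benchmarkable sum($(rand(1000)))))   # target estimate
judge(new, old)                         # classified as improvement, regression, or invariant
judge(new, old; time_tolerance = 0.10)  # widen the tolerance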

source

BenchmarkTools.judge — Method

judge(r::TrialRatio, [time_tolerance::Float64=0.05])

source

BenchmarkTools.ratio — Method

ratio(target::TrialEstimate, baseline::TrialEstimate)

Returns a ratio of the target estimate to the baseline estimate, e.g. time(target)/time(baseline).
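
For example, the ratio of two estimates can be computed and then passed to judge (see above):

a = median(run(@benchmarkable sin(1)))
b = median(run(@benchmarkable cos(1)))
r = ratio(a, b)   # e.g. time(a)/time(b)
judge(r)          # a ratio can also be judged directly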

source

BenchmarkTools.tune! — Function

tune!(b::Benchmark, p::Parameters = b.params; verbose::Bool = false, pad = "", kwargs...)

Tune a Benchmark instance.

If the number of evals in the parameters p has been set manually, this function does nothing.
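
A typical workflow, sketched with a benchmark created via @benchmarkable:

b = @benchmarkable sum($(rand(1000)))
tune!(b)   # estimate a suitable number of evaluations per sample
run(b)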

source

BenchmarkTools.tune! — Method

tune!(group::BenchmarkGroup; verbose::Bool = false, pad = "", kwargs...)

Tune a BenchmarkGroup instance. For most benchmarks, tune! needs to perform many evaluations to determine the proper parameters for any given benchmark - often more evaluations than are performed when running a trial. In fact, the majority of total benchmarking time is usually spent tuning parameters, rather than actually running trials.

source

BenchmarkTools.@ballocated — Macro

@ballocated expression [other parameters...]

Similar to the @allocated macro included with Julia, this returns the number of bytes allocated when executing a given expression. It uses the @benchmark macro, however, and accepts all of the same additional parameters as @benchmark. The returned allocations correspond to the trial with the minimum elapsed time measured during the benchmark.
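
For example (setup is one of the @benchmark parameters accepted here):

@ballocated rand(10)                        # bytes allocated by rand(10)
@ballocated sort(v) setup=(v = rand(1000))  # setup cost is excluded from the measurement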

source

BenchmarkTools.@ballocations — Macro

@ballocations expression [other parameters...]

Similar to the @allocations macro included with Julia, this macro evaluates an expression, discarding the resulting value, and returns the total number of allocations made during its execution.

Unlike @allocations, it uses the @benchmark macro from the BenchmarkTools package, and accepts all of the same additional parameters as @benchmark. The returned number of allocations corresponds to the trial with the minimum elapsed time measured during the benchmark.
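
For example:

@ballocations rand(10)   # number of allocations made by rand(10)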

source

BenchmarkTools.@belapsed — Macro

@belapsed expression [other parameters...]

Similar to the @elapsed macro included with Julia, this returns the elapsed time (in seconds) to execute a given expression. It uses the @benchmark macro, however, and accepts all of the same additional parameters as @benchmark. The returned time is the minimum elapsed time measured during the benchmark.
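
For example:

@belapsed sin(1)                          # seconds for the fastest measured evaluation
@belapsed sum(x) setup=(x = rand(1000))   # time sum only; x is built in setup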

source

BenchmarkTools.@benchmark — Macro

@benchmark <expr to benchmark> [setup=<setup expr>]

Run benchmark on a given expression.

Example

The simplest usage of this macro is to put it in front of what you want to benchmark.

julia> @benchmark sin(1)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     13.610 ns (0.00% GC)
  median time:      13.622 ns (0.00% GC)
  mean time:        13.638 ns (0.00% GC)
  maximum time:     21.084 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     998

You can interpolate values into @benchmark expressions:

# rand(1000) is executed for each evaluation
julia> @benchmark sum(rand(1000))
BenchmarkTools.Trial:
  memory estimate:  7.94 KiB
  allocs estimate:  1
  --------------
  minimum time:     1.566 μs (0.00% GC)
  median time:      2.135 μs (0.00% GC)
  mean time:        3.071 μs (25.06% GC)
  maximum time:     296.818 μs (95.91% GC)
  --------------
  samples:          10000
  evals/sample:     10

# rand(1000) is evaluated at definition time, and the resulting
# value is interpolated into the benchmark expression
julia> @benchmark sum($(rand(1000)))
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     101.627 ns (0.00% GC)
  median time:      101.909 ns (0.00% GC)
  mean time:        103.834 ns (0.00% GC)
  maximum time:     276.033 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     935

source

BenchmarkTools.@benchmarkable — Macro

@benchmarkable <expr to benchmark> [setup=<setup expr>]

Create a Benchmark instance for the given expression. @benchmarkable has similar syntax to @benchmark. See also @benchmark.
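
A small sketch of the usual create/tune/run cycle:

b = @benchmarkable sort(v) setup=(v = rand(1000))
tune!(b)           # pick evals/sample before measuring
results = run(b)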

source

BenchmarkTools.@benchmarkset — Macro

@benchmarkset "title" begin ... end

Create a benchmark set, or multiple benchmark sets if a for loop is provided.

This macro is deprecated; instead, add to group = BenchmarkGroup() using group[key] = @benchmark...

Examples

@benchmarkset "suite" for k in 1:5
    @case "case <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>k</mi><mi mathvariant="normal">&quot;</mi><mi>r</mi><mi>a</mi><mi>n</mi><mi>d</mi><mo stretchy="false">(</mo></mrow><annotation encoding="application/x-tex">k&quot; rand(</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1em;vertical-align:-0.25em;"></span><span class="mord mathnormal" style="margin-right:0.03148em;">k</span><span class="mord">&quot;</span><span class="mord mathnormal" style="margin-right:0.02778em;">r</span><span class="mord mathnormal">an</span><span class="mord mathnormal">d</span><span class="mopen">(</span></span></span></span>k, $k)
end
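
As a sketch of the replacement suggested in the deprecation note above, the same suite can be expressed with a plain BenchmarkGroup:

suite = BenchmarkGroup()
for k in 1:5
    suite["case $k"] = @benchmarkable rand($k, $k)
end
run(suite)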

source

BenchmarkTools.@bprofile — Macro

@bprofile expression [other parameters...]

Run @benchmark while profiling. This is similar to

@profile @benchmark expression [other parameters...]

but the profiling is applied only to the main execution (after compilation and tuning). The profile buffer is cleared prior to execution.

View the profile results with Profile.print(...). See the profiling section of the Julia manual for more information.
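
For example, using the standard-library Profile module mentioned above:

using Profile
@bprofile sum(rand(1000))
Profile.print()   # inspect the collected profile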

source

BenchmarkTools.@btime — Macro

@btime expression [other parameters...]

Similar to the @time macro included with Julia, this executes an expression, printing the time it took to execute and the memory allocated before returning the value of the expression.

Unlike @time, it uses the @benchmark macro, and accepts all of the same additional parameters as @benchmark. The printed time is the minimum elapsed time measured during the benchmark.
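
For example:

@btime sin(1)
@btime sum($(rand(1000)))   # interpolate the argument so its construction is not timed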

source

BenchmarkTools.@btimed — Macro

@btimed expression [other parameters...]

Similar to the @timed macro included with Julia, this macro executes an expression and returns a NamedTuple containing the value of the expression, the minimum elapsed time in seconds, the total bytes allocated, the number of allocations, and the garbage collection time in seconds during the benchmark.

Unlike @timed, it uses the @benchmark macro from the BenchmarkTools package for more detailed and consistent performance measurements. The elapsed time reported is the minimum time measured during the benchmark. It accepts all additional parameters supported by @benchmark.
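
A minimal sketch; the returned NamedTuple holds the quantities listed above:

res = @btimed sum(rand(1000))
keys(res)   # inspect the field names of the returned NamedTuple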

source

BenchmarkTools.@case — Macro

@case title <expr to benchmark> [setup=<setup expr>]

Mark an expression as a benchmark case. Must be used inside @benchmarkset.

This macro is deprecated; instead, add to group = BenchmarkGroup() using group[key] = @benchmark...

source

Base.run — Function

run(b::Benchmark[, p::Parameters = b.params]; kwargs...)

Run the benchmark defined by @benchmarkable.

source

run(group::BenchmarkGroup[, args...]; verbose::Bool = false, pad = "", kwargs...)

Run the benchmark group, with benchmark parameters set to group's by default.
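
A small sketch of running a group, assuming the benchmarks were registered with @benchmarkable:

suite = BenchmarkGroup()
suite["sin"] = @benchmarkable sin(1)
suite["sum"] = @benchmarkable sum($(rand(1000)))
tune!(suite)
results = run(suite; verbose = true)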

source

BenchmarkTools.save — Function

BenchmarkTools.save(filename, args...)

Save serialized benchmarking objects (e.g. results or parameters) to a JSON file.

source

BenchmarkTools.load — Function

BenchmarkTools.load(filename)

Load serialized benchmarking objects (e.g. results or parameters) from a JSON file.
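
A round-trip sketch, assuming results.json is a writable path, suite is a BenchmarkGroup as in the run example above, and that load returns the saved objects as a collection:

BenchmarkTools.save("results.json", run(suite))
loaded = BenchmarkTools.load("results.json")
results = first(loaded)   # the BenchmarkGroup that was saved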

source