References
`BenchmarkTools.clear_empty!` — Method

```julia
clear_empty!(group::BenchmarkGroup)
```

Recursively remove any empty subgroups from `group`.

Use this to prune a `BenchmarkGroup` after accessing incorrect fields: for example, `g = BenchmarkGroup(); g[1]` without storing anything to `g[1]` will create an empty subgroup `g[1]`.
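A minimal sketch of the pruning workflow described above, relying only on the auto-created-subgroup behavior the docstring mentions:

```julia
using BenchmarkTools

g = BenchmarkGroup()
g[1]                      # accidental access creates an empty subgroup g[1]
clear_empty!(g)           # prune the empty subgroup again
@assert isempty(keys(g))  # g holds no entries afterwards
```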
`BenchmarkTools.tune!` — Function

```julia
tune!(b::Benchmark, p::Parameters = b.params; verbose::Bool = false, pad = "", kwargs...)
```

Tune a `Benchmark` instance.

If the number of evals in the parameters `p` has been set manually, this function does nothing.
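A brief sketch of the typical workflow, tuning a benchmark before running it (the small `seconds` budget here is just to keep the illustration fast):

```julia
using BenchmarkTools

b = @benchmarkable sum($(rand(100)))
tune!(b)                    # estimates a good number of evals per sample
t = run(b; seconds = 0.1)   # run with the tuned parameters
@assert t isa BenchmarkTools.Trial
```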
`BenchmarkTools.tune!` — Method

```julia
tune!(group::BenchmarkGroup; verbose::Bool = false, pad = "", kwargs...)
```

Tune a `BenchmarkGroup` instance. For most benchmarks, `tune!` needs to perform many evaluations to determine the proper parameters for any given benchmark - often more evaluations than are performed when running a trial. In fact, the majority of total benchmarking time is usually spent tuning parameters, rather than actually running trials.
`BenchmarkTools.@ballocated` — Macro

```julia
@ballocated expression [other parameters...]
```

Similar to the `@allocated` macro included with Julia, this returns the number of bytes allocated when executing a given expression. It uses the `@benchmark` macro, however, and accepts all of the same additional parameters as `@benchmark`. The returned allocations correspond to the trial with the minimum elapsed time measured during the benchmark.
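As an illustrative sketch, summing a preallocated, interpolated vector should report zero allocations:

```julia
using BenchmarkTools

v = rand(1000)
bytes = @ballocated sum($v) seconds=0.1
@assert bytes == 0   # summing an existing Vector{Float64} allocates nothing
```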
`BenchmarkTools.@belapsed` — Macro

```julia
@belapsed expression [other parameters...]
```

Similar to the `@elapsed` macro included with Julia, this returns the elapsed time (in seconds) to execute a given expression. It uses the `@benchmark` macro, however, and accepts all of the same additional parameters as `@benchmark`. The returned time is the minimum elapsed time measured during the benchmark.
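For instance, a quick sketch measuring the minimum elapsed time of a summation:

```julia
using BenchmarkTools

t = @belapsed sum($(rand(100))) seconds=0.1
@assert t isa Float64 && t > 0   # minimum elapsed time, in seconds
```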
`BenchmarkTools.@benchmark` — Macro

```julia
@benchmark <expr to benchmark> [setup=<setup expr>]
```

Run a benchmark on a given expression.

Example

The simplest usage of this macro is to put it in front of what you want to benchmark.

```julia
julia> @benchmark sin(1)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     13.610 ns (0.00% GC)
  median time:      13.622 ns (0.00% GC)
  mean time:        13.638 ns (0.00% GC)
  maximum time:     21.084 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     998
```

You can interpolate values into `@benchmark` expressions:

```julia
# rand(1000) is executed for each evaluation
julia> @benchmark sum(rand(1000))
BenchmarkTools.Trial:
  memory estimate:  7.94 KiB
  allocs estimate:  1
  --------------
  minimum time:     1.566 μs (0.00% GC)
  median time:      2.135 μs (0.00% GC)
  mean time:        3.071 μs (25.06% GC)
  maximum time:     296.818 μs (95.91% GC)
  --------------
  samples:          10000
  evals/sample:     10

# rand(1000) is evaluated at definition time, and the resulting
# value is interpolated into the benchmark expression
julia> @benchmark sum($(rand(1000)))
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     101.627 ns (0.00% GC)
  median time:      101.909 ns (0.00% GC)
  mean time:        103.834 ns (0.00% GC)
  maximum time:     276.033 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     935
```
`BenchmarkTools.@benchmarkable` — Macro

```julia
@benchmarkable <expr to benchmark> [setup=<setup expr>]
```

Create a `Benchmark` instance for the given expression. `@benchmarkable` uses the same syntax as `@benchmark`. See also `@benchmark`.
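A short sketch of `@benchmarkable` with a `setup` expression, deferring execution until `run` is called:

```julia
using BenchmarkTools

# setup runs before each sample; evals=1 keeps the mutated input fresh
b = @benchmarkable sort!(x) setup=(x = rand(100)) evals=1
t = run(b; seconds = 0.1)
@assert time(minimum(t)) > 0   # minimum time of the trial, in nanoseconds
```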
`BenchmarkTools.@benchmarkset` — Macro

```julia
@benchmarkset "title" begin ... end
```

Create a benchmark set, or multiple benchmark sets if a `for` loop is provided.

Examples

```julia
@benchmarkset "suite" for k in 1:5
    @case "case $k" rand($k, $k)
end
```
`BenchmarkTools.@bprofile` — Macro

```julia
@bprofile expression [other parameters...]
```

Run `@benchmark` while profiling. This is similar to

```julia
@profile @benchmark expression [other parameters...]
```

but the profiling is applied only to the main execution (after compilation and tuning). The profile buffer is cleared prior to execution.

View the profile results with `Profile.print(...)`. See the profiling section of the Julia manual for more information.
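A sketch of collecting and inspecting profile data with `@bprofile` (with a deliberately short time budget, the profile may contain few samples):

```julia
using BenchmarkTools, Profile

t = @bprofile sum($(rand(1000))) seconds=0.1
Profile.print(maxdepth = 3)    # view where the benchmark spent its time
@assert t isa BenchmarkTools.Trial
```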
`BenchmarkTools.@btime` — Macro

```julia
@btime expression [other parameters...]
```

Similar to the `@time` macro included with Julia, this executes an expression, printing the time it took to execute and the memory allocated before returning the value of the expression.

Unlike `@time`, it uses the `@benchmark` macro, and accepts all of the same additional parameters as `@benchmark`. The printed time is the minimum elapsed time measured during the benchmark.
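A quick sketch showing that `@btime` returns the expression's value after printing the timing:

```julia
using BenchmarkTools

v = rand(100)
s = @btime sum($v) seconds=0.1   # prints the minimum time and allocations
@assert s == sum(v)              # the expression's value is returned
```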
`BenchmarkTools.@case` — Macro

```julia
@case title <expr to benchmark> [setup=<setup expr>]
```

Mark an expression as a benchmark case. Must be used inside `@benchmarkset`.
`Base.run` — Function

```julia
run(b::Benchmark[, p::Parameters = b.params]; kwargs...)
```

Run the benchmark defined by `@benchmarkable`.

```julia
run(group::BenchmarkGroup[, args...]; verbose::Bool = false, pad = "", kwargs...)
```

Run the benchmark group, with benchmark parameters set to `group`'s by default.
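A minimal sketch of running a one-entry group, which yields a group of `Trial` results keyed the same way:

```julia
using BenchmarkTools

suite = BenchmarkGroup()
suite["sin"] = @benchmarkable sin(1.0)
results = run(suite; verbose = false, seconds = 0.1)
@assert results["sin"] isa BenchmarkTools.Trial
```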
`BenchmarkTools.save` — Function

```julia
BenchmarkTools.save(filename, args...)
```

Save serialized benchmarking objects (e.g. results or parameters) to a JSON file.

`BenchmarkTools.load` — Function

```julia
BenchmarkTools.load(filename)
```

Load serialized benchmarking objects (e.g. results or parameters) from a JSON file.
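A round-trip sketch of `save` and `load` (the temporary path is purely illustrative; `load` is assumed here to return the saved objects as a vector):

```julia
using BenchmarkTools

t = @benchmark sin(1.0) seconds=0.1
path = joinpath(mktempdir(), "results.json")
BenchmarkTools.save(path, t)
loaded = BenchmarkTools.load(path)
@assert first(loaded) isa BenchmarkTools.Trial
```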