Reference
BenchmarkConfig

A BenchmarkConfig contains the configuration for the benchmarks to be executed by benchmarkpkg.

This includes the following:

  • The commit of the package the benchmarks are run on.

  • What julia command should be run, i.e. the path to the Julia executable and the command flags used (e.g. optimization level with -O).

  • Custom environment variables (e.g. JULIA_NUM_THREADS).

BenchmarkConfig(;id::Union{String, Void} = nothing,
                 juliacmd::Cmd = `$(joinpath(JULIA_HOME, Base.julia_exename()))`,
                 env::Dict{String, Any} = Dict{String, Any}())

Creates a BenchmarkConfig from the following keyword arguments:

  • id - A git identifier like a commit, branch, tag, "HEAD", "HEAD~1" etc. If id == nothing, the benchmarks are run on the current state of the repo (even if it is dirty).

  • juliacmd - Used to execute the benchmarks; defaults to the Julia executable that the PkgBenchmark functions are called from. Can also include command flags.

  • env - Contains custom environment variables that will be active when the benchmarks are run.

Examples

julia> using PkgBenchmark

julia> BenchmarkConfig(id = "performance_improvements",
                       juliacmd = `julia -O3`,
                       env = Dict("JULIA_NUM_THREADS" => 4))
BenchmarkConfig:
    id: performance_improvements
    juliacmd: `julia -O3`
    env: JULIA_NUM_THREADS => 4

BenchmarkJudgement

Stores the results from running a judgement; see judge.

The following (unexported) methods are defined on a BenchmarkJudgement (written below as judgement):

  • target_result(judgement)::BenchmarkResults - the BenchmarkResults of the target.

  • baseline_result(judgement)::BenchmarkResults - the BenchmarkResults of the baseline.

  • benchmarkgroup(judgement)::BenchmarkGroup - a BenchmarkGroup containing the estimated results.
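
For illustration, a minimal sketch of using these accessors (the package name and git ids are hypothetical; since the methods are unexported, they are qualified with the module name):

using PkgBenchmark

judgement = judge("MyPkg", "my-feature", "master")

PkgBenchmark.target_result(judgement)    # BenchmarkResults of the target
PkgBenchmark.baseline_result(judgement)  # BenchmarkResults of the baseline
PkgBenchmark.benchmarkgroup(judgement)   # BenchmarkGroup with the estimated results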

A BenchmarkJudgement can be exported to markdown using the function export_markdown.

See also: BenchmarkResults.

BenchmarkTools.judge

judge(target::BenchmarkResults, baseline::BenchmarkResults, f;
      judgekwargs = Dict())

Judges the two BenchmarkResults in target and baseline using the function f.

Return value

Returns a BenchmarkJudgement.
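
For illustration, a minimal sketch (the package name and git ids are hypothetical; minimum is used as the estimator function):

using PkgBenchmark

target = benchmarkpkg("MyPkg", "my-feature")  # benchmark the feature branch
baseline = benchmarkpkg("MyPkg", "master")    # benchmark the baseline
judgement = judge(target, baseline, minimum)  # compare the minimum estimates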

judge(pkg::String,
      [target]::Union{String, BenchmarkConfig},
      baseline::Union{String, BenchmarkConfig};
      kwargs...)

Arguments:

  • pkg - The package to benchmark.

  • target - What to judge, given as a git id or a BenchmarkConfig. If skipped, the current state of the package repo is used.

  • baseline - The commit / BenchmarkConfig to compare target against.

Keyword arguments:

  • f - Estimator function to use in the judging.

  • judgekwargs::Dict{Symbol, Any} - keyword arguments to pass to the judge function in BenchmarkTools.

The remaining keyword arguments are passed to benchmarkpkg.

Return value:

Returns a BenchmarkJudgement.
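
For illustration, a hedged sketch of this form (the package name, git ids, and tolerance are hypothetical; time_tolerance is a keyword of the judge function in BenchmarkTools):

using PkgBenchmark

# judge the feature branch against master, forwarding a tolerance to BenchmarkTools
judgement = judge("MyPkg", "my-feature", "master";
                  judgekwargs = Dict(:time_tolerance => 0.1))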

benchmarkpkg(pkg, [target]::Union{String, BenchmarkConfig}; kwargs...)

Run a benchmark on the package pkg using the BenchmarkConfig or git identifier target. Examples of git identifiers are commit shas, branch names, or e.g. "HEAD~1". Return a BenchmarkResults.

The argument pkg can be the name of a package or a path to a package directory.

Keyword arguments:

  • script - The script that defines the benchmarks; if not given, defaults to benchmark/benchmarks.jl in the package folder.

  • resultfile - If set, saves the output to resultfile.

  • retune - Force a re-tune, saving the new tuning to the tune file.
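
For example, a hedged sketch combining these keywords (the result path is hypothetical):

using PkgBenchmark

benchmarkpkg("MyPkg", "my-feature";
             resultfile = "my-feature-results", # hypothetical output path
             retune = true)                     # force a re-tune of the suite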

The result can be used by functions such as judge. If you choose to, you can save the results manually using writeresults(file, results), where results is the return value of this function. They can be read back with readresults(file).

If a REQUIRE file exists in the same folder as script, load package requirements from that file before benchmarking.

Example invocations:

using PkgBenchmark

benchmarkpkg("MyPkg") # run the benchmarks at the current state of the repository
benchmarkpkg("MyPkg", "my-feature") # run the benchmarks for a particular branch/commit/tag
benchmarkpkg("MyPkg", "my-feature"; script="/home/me/mycustombenchmark.jl")
benchmarkpkg("MyPkg", BenchmarkConfig(id = "my-feature",
                                      env = Dict("JULIA_NUM_THREADS" => 4),
                                      juliacmd = `julia -O3`))

export_markdown(file::String, results::Union{BenchmarkResults, BenchmarkJudgement})
export_markdown(io::IO,       results::Union{BenchmarkResults, BenchmarkJudgement})

Writes the results to file or io in markdown format.

See also: BenchmarkResults, BenchmarkJudgement
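
For illustration, a minimal sketch (the package name and output path are hypothetical):

using PkgBenchmark

results = benchmarkpkg("MyPkg")
export_markdown("results.md", results)   # write the results as a markdown report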

readresults(file::String)

Reads the BenchmarkResults stored in file (given as a path).

writeresults(file::String, results::BenchmarkResults)

Writes the BenchmarkResults to file.
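
For illustration, a round-trip sketch combining writeresults and readresults (the package name and file path are hypothetical):

using PkgBenchmark

results = benchmarkpkg("MyPkg")
writeresults("MyPkg-results", results)    # persist the results to disk
results2 = readresults("MyPkg-results")   # load them back later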
