- PkgBenchmark.BenchmarkConfig
- PkgBenchmark.BenchmarkJudgement
- PkgBenchmark.benchmarkpkg
- PkgBenchmark.export_markdown
- PkgBenchmark.readresults
- PkgBenchmark.writeresults
PkgBenchmark.BenchmarkConfig — Type

```julia
BenchmarkConfig
```

A BenchmarkConfig contains the configuration for the benchmarks to be executed by `benchmarkpkg`.

This includes the following:

- The commit of the package the benchmarks are run on.
- What julia command should be run, i.e. the path to the Julia executable and the command flags used (e.g. optimization level with -O).
- Custom environment variables (e.g. JULIA_NUM_THREADS).
PkgBenchmark.BenchmarkConfig — Method

```julia
BenchmarkConfig(;id::Union{String, Void} = nothing,
                juliacmd::Cmd = `joinpath(JULIA_HOME, Base.julia_exename())`,
                env::Dict{String, Any} = Dict{String, Any}())
```

Creates a BenchmarkConfig from the following keyword arguments:

- `id` - A git identifier like a commit, branch, tag, "HEAD", "HEAD~1" etc. If `id == nothing` then the benchmark is done on the current state of the repo (even if it is dirty).
- `juliacmd` - Used to execute the benchmarks, defaults to the julia executable that the PkgBenchmark functions are called from. Can also include command flags.
- `env` - Contains custom environment variables that will be active when the benchmarks are run.
Examples

```julia
julia> using PkgBenchmark

julia> BenchmarkConfig(id = "performance_improvements",
                       juliacmd = `julia -O3`,
                       env = Dict("JULIA_NUM_THREADS" => 4))
BenchmarkConfig:
    id: performance_improvements
    juliacmd: `julia -O3`
    env: JULIA_NUM_THREADS => 4
```

PkgBenchmark.BenchmarkJudgement — Type

Stores the results from running a judgement, see `judge`.
The following (unexported) methods are defined on a BenchmarkJudgement (written below as judgement):
- `target_result(judgement)::BenchmarkResults` - the `BenchmarkResults` of the `target`.
- `baseline_result(judgement)::BenchmarkResults` - the `BenchmarkResults` of the `baseline`.
- `benchmarkgroup(judgement)::BenchmarkGroup` - a `BenchmarkGroup` containing the estimated results.
A BenchmarkJudgement can be exported to markdown using the function export_markdown.
See also BenchmarkResults
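As an illustration, here is a hedged sketch of how these accessors might be used on a judgement; the package name "MyPkg" and the git identifiers are placeholders, not part of the docstring above.

```julia
using PkgBenchmark

# Hypothetical package and git identifiers: compare a feature branch against master.
judgement = judge("MyPkg", "my-feature", "master")

# The accessors are unexported, so they are qualified with the module name.
target   = PkgBenchmark.target_result(judgement)    # BenchmarkResults of the target
baseline = PkgBenchmark.baseline_result(judgement)  # BenchmarkResults of the baseline
group    = PkgBenchmark.benchmarkgroup(judgement)   # BenchmarkGroup with the estimated results

# The judgement can also be written out as a markdown report.
export_markdown("judgement.md", judgement)
```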
BenchmarkTools.judge — Function

```julia
judge(target::BenchmarkResults, baseline::BenchmarkResults, f;
      judgekwargs = Dict())
```

Judges the two BenchmarkResults in `target` and `baseline` using the function `f`.
Return value
Returns a BenchmarkJudgement
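A minimal sketch of judging two previously obtained result sets; the file names are placeholders, `minimum` is just one possible estimator, and the `:time_tolerance` keyword is an assumption about what BenchmarkTools accepts.

```julia
using PkgBenchmark

# Assumes two result files saved earlier with writeresults (paths are placeholders).
target   = readresults("results-feature")
baseline = readresults("results-master")

# Judge using the minimum estimator; judgekwargs are forwarded to BenchmarkTools' judge.
judgement = judge(target, baseline, minimum; judgekwargs = Dict(:time_tolerance => 0.05))
```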
BenchmarkTools.judge — Method

```julia
judge(pkg::String,
      [target]::Union{String, BenchmarkConfig},
      baseline::Union{String, BenchmarkConfig};
      kwargs...)
```

Arguments:

- `pkg` - The package to benchmark.
- `target` - What to judge, given as a git id or a `BenchmarkConfig`. If skipped, use the current state of the package repo.
- `baseline` - The commit / `BenchmarkConfig` to compare `target` against.
Keyword arguments:
- `f` - Estimator function to use in the judging.
- `judgekwargs::Dict{Symbol, Any}` - keyword arguments to pass to the `judge` function in BenchmarkTools.

The remaining keyword arguments are passed to `benchmarkpkg`.
Return value:
Returns a BenchmarkJudgement
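For example, a hedged sketch of judging a feature branch against the default branch; "MyPkg" and the branch names are placeholders.

```julia
using PkgBenchmark

# Compare the "my-feature" branch against "master"; retune is forwarded to benchmarkpkg.
judgement = judge("MyPkg", "my-feature", "master"; retune = true)

# Skipping `target` compares the current (possibly dirty) state of the repo against "master".
judgement = judge("MyPkg", "master")
```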
PkgBenchmark.benchmarkpkg — Function

```julia
benchmarkpkg(pkg, [target]::Union{String, BenchmarkConfig}; kwargs...)
```

Run a benchmark on the package `pkg` using the BenchmarkConfig or git identifier `target`. Examples of git identifiers are commit shas, branch names, or e.g. "HEAD~1". Return a `BenchmarkResults`.
The argument `pkg` can be the name of a package or a path to a package directory.
Keyword arguments:
- `script` - The script with the benchmarks; if not given, defaults to `benchmark/benchmarks.jl` in the package folder.
- `resultfile` - If set, saves the output to `resultfile`.
- `retune` - Force a re-tune, saving the new tuning to the tune file.
The result can be used by functions such as `judge`. If you choose to, you can save the results manually using `writeresults(file, results)`, where `results` is the return value of this function. They can be read back with `readresults(file)`.
If a REQUIRE file exists in the same folder as the benchmark script, the package requirements in it are loaded before benchmarking.
Example invocations:

```julia
using PkgBenchmark

benchmarkpkg("MyPkg") # run the benchmarks at the current state of the repository
benchmarkpkg("MyPkg", "my-feature") # run the benchmarks for a particular branch/commit/tag
benchmarkpkg("MyPkg", "my-feature"; script="/home/me/mycustombenchmark.jl")
benchmarkpkg("MyPkg", BenchmarkConfig(id = "my-feature",
                                      env = Dict("JULIA_NUM_THREADS" => 4),
                                      juliacmd = `julia -O3`))
```

PkgBenchmark.export_markdown — Method

```julia
export_markdown(file::String, results::Union{BenchmarkResults, BenchmarkJudgement})
export_markdown(io::IO, results::Union{BenchmarkResults, BenchmarkJudgement})
```

Writes the `results` to `file` or `io` in markdown format.
See also: BenchmarkResults, BenchmarkJudgement
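A short sketch of writing a markdown report; the package name and file name are placeholders.

```julia
using PkgBenchmark

# Benchmark the current state of a hypothetical package and write a markdown report.
results = benchmarkpkg("MyPkg")
export_markdown("benchmark-report.md", results)

# The IO method works the same way on any writable stream.
open("benchmark-report.md", "w") do io
    export_markdown(io, results)
end
```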
PkgBenchmark.readresults — Method

```julia
readresults(file::String)
```

Reads the BenchmarkResults stored in `file` (given as a path).
PkgBenchmark.writeresults — Method

```julia
writeresults(file::String, results::BenchmarkResults)
```

Writes the `results` to `file`.
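A hedged sketch of a save-and-load round trip; the package name and file path are placeholders.

```julia
using PkgBenchmark

# Run the benchmarks, save the results to a file, and read them back later.
results = benchmarkpkg("MyPkg")
writeresults("mypkg-results", results)
loaded = readresults("mypkg-results")
```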