Merge 35c0a25 into 4180cb0
KristofferC committed Aug 1, 2017
2 parents 4180cb0 + 35c0a25 commit c34b1e4
Showing 14 changed files with 437 additions and 141 deletions.
9 changes: 5 additions & 4 deletions docs/make.jl
@@ -6,10 +6,11 @@ makedocs(
     sitename = "PkgBenchmark.jl",
     pages = Any[
         "Home" => "index.md",
-        "Manual" => [
-            "man/define_benchmarks.md",
-            "man/run_benchmarks.md",
-        ]
+        "define_benchmarks.md",
+        "run_benchmarks.md",
+        "comparing_commits.md",
+        "export_markdown.md",
+        "Reference" => "ref.md",
     ]
 )

7 changes: 7 additions & 0 deletions docs/src/comparing_commits.md
@@ -0,0 +1,7 @@
# Comparing commits

You can use `judge` to compare benchmark results of two versions of the package.

```@docs
judge
```
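As a minimal sketch (the package name `MyPkg` and the git refs below are placeholders, not part of this package), comparing a feature branch against `master` could look like:

```julia
using PkgBenchmark

# Run the benchmark suite on the branch "my-feature" and on "master",
# then judge the former against the latter.
judgement = judge("MyPkg", "my-feature", "master")
```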
docs/src/man/define_benchmarks.md → docs/src/define_benchmarks.md
File renamed without changes.
7 changes: 7 additions & 0 deletions docs/src/export_markdown.md
@@ -0,0 +1,7 @@
# Export to markdown

It is possible to export results from a [`PkgBenchmark.BenchmarkResults`](@ref) using the function `export_markdown`.

```@docs
export_markdown
```
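For example (a sketch; the package name `MyPkg` and the output path are placeholders), after running the benchmarks the results can be written to a file:

```julia
using PkgBenchmark

# Run the suite for the (hypothetical) package MyPkg ...
results = benchmarkpkg("MyPkg")

# ... and write a markdown report of the results.
export_markdown("benchmark_results.md", results)
```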
21 changes: 0 additions & 21 deletions docs/src/man/run_benchmarks.md

This file was deleted.

9 changes: 9 additions & 0 deletions docs/src/ref.md
@@ -0,0 +1,9 @@
```@index
Pages = ["ref.md"]
Modules = [PkgBenchmark]
```

```@autodocs
Modules = [PkgBenchmark]
Private = false
```
59 changes: 59 additions & 0 deletions docs/src/run_benchmarks.md
@@ -0,0 +1,59 @@
```@meta
DocTestSetup = quote
using PkgBenchmark
end
```

# Running a benchmark suite

Use `benchmarkpkg` to run the benchmark suite defined in the previous section.

```@docs
benchmarkpkg
```
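For example (a sketch; `MyPkg` is a placeholder package name), the current state of a package or a specific git reference can be benchmarked:

```julia
using PkgBenchmark

# Benchmark the current (possibly dirty) state of the package.
results = benchmarkpkg("MyPkg")

# Benchmark a specific git reference instead, here the master branch.
results_master = benchmarkpkg("MyPkg", "master")
```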

The results of a benchmark are returned as a `BenchmarkResults`

```@docs
PkgBenchmark.BenchmarkResults
```
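The accessor functions listed in the docstring are unexported, so they are called through the module. A short sketch, assuming `results` was obtained from a `benchmarkpkg` call as above:

```julia
suite = PkgBenchmark.benchmarkgroup(results)  # the underlying BenchmarkGroup
ran   = PkgBenchmark.date(results)            # when the benchmarks were run
sha   = PkgBenchmark.commit(results)          # package commit, or "dirty"
```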

## More advanced customization

Instead of passing a commit, branch, etc. as a `String` to `benchmarkpkg`, a [`BenchmarkConfig`](@ref) can be passed.
This object contains the package commit, the julia command, and the environment variables to use
when benchmarking. The default values can be seen by calling the default constructor:

```julia-repl
julia> BenchmarkConfig()
BenchmarkConfig:
id: nothing
juliacmd: `/home/user/julia/julia`
env:
```

The `id` is a commit, branch, etc. as described in the previous section. An `id` with value `nothing` means that the current state of the package will be benchmarked.
The default value of `juliacmd` is `joinpath(JULIA_HOME, Base.julia_exename())`, which is the command to run the julia executable without any command line arguments.

To instead benchmark the branch `PR`, using the julia command `julia -O3`
with the environment variable `JULIA_NUM_THREADS` set to `4`, the config would be created as

```jldoctest
julia> config = BenchmarkConfig(id = "PR",
juliacmd = `julia -O3`,
env = Dict("JULIA_NUM_THREADS" => 4))
BenchmarkConfig:
id: PR
juliacmd: `julia -O3`
env: JULIA_NUM_THREADS => 4
```

To benchmark the package with this config, call [`benchmarkpkg`](@ref) as, e.g.,

```julia
benchmarkpkg("Tensors", config)
```

!!! info
    The `id` keyword to `BenchmarkConfig` does not have to be a branch; it can be anything
    that git can understand, for example a commit id or a tag.
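For example (a sketch; the tag and commit id below are placeholders), these are both valid configs:

```julia
config_tag    = BenchmarkConfig(id = "v0.1.0")   # a tag
config_commit = BenchmarkConfig(id = "abc1234")  # a (shortened) commit id
```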
7 changes: 5 additions & 2 deletions src/PkgBenchmark.jl
@@ -5,13 +5,16 @@ module PkgBenchmark
 using BenchmarkTools
 using FileIO
 using JLD
+using Compat

-export benchmarkpkg, judge, @benchgroup, @bench, writeresults, readresults
+export benchmarkpkg, judge, @benchgroup, @bench, writeresults, readresults, export_markdown
+export BenchmarkConfig, BenchmarkResults

-include("util.jl")
 include("macros.jl")
+include("benchmarkconfig.jl")
 include("benchmarkresults.jl")
 include("runbenchmark.jl")
 include("judge.jl")
+include("util.jl")

 end # module
78 changes: 78 additions & 0 deletions src/benchmarkconfig.jl
@@ -0,0 +1,78 @@
"""
BenchmarkConfig
A `BenchmarkConfig` contains the configuration for the benchmarks to be executed
by [`benchmarkpkg`](@ref).
This includes the following:
* The commit of the package the benchmarks are run on.
* What julia command should be run, i.e. the path to the Julia executable and
the command flags used (e.g. optimization level with `-O`).
* Custom environment variables (e.g. `JULIA_NUM_THREADS`).
"""
struct BenchmarkConfig
id::Union{String,Void}
juliacmd::Cmd
env::Dict{String,Any}
end

# Hash identifying a particular benchmark run: the package name and commit,
# the julia commit, the julia command flags, and the environment variables.
function _hash(pkgname::String, pkgcommit::String, juliacommit, config::BenchmarkConfig)
    return hash(pkgname,
           hash(juliacommit,
           hash(length(config.juliacmd) > 1 ? config.juliacmd[2:end] : 0,
           hash(pkgcommit,
           hash(config.env)))))
end

"""
BenchmarkConfig(;id::Union{String, Void} = nothing,
juliacmd::Cmd = `$(joinpath(JULIA_HOME, Base.julia_exename()))`,
env::Dict{String, Any} = Dict{String, Any}())
Creates a `BenchmarkConfig` from the following keyword arguments:
* `id` - A git identifier like a commit, branch, tag, "HEAD", "HEAD~1" etc.
If `id == nothing` then the benchmarks are run on the current state
of the repo (even if it is dirty).
* `juliacmd` - Used to execute the benchmarks; defaults to the julia executable
that the PkgBenchmark functions are called from. Can also include command flags.
* `env` - Contains custom environment variables that will be active when the
benchmarks are run.
# Examples
```julia
BenchmarkConfig(id = "performance_improvements",
juliacmd = `julia -O3`,
env = Dict("JULIA_NUM_THREADS" => 4))
```
"""
function BenchmarkConfig(;id::Union{String,Void} = nothing,
                          juliacmd::Cmd = `$(joinpath(JULIA_HOME, Base.julia_exename()))`,
                          env::Dict = Dict{String,Any}())
BenchmarkConfig(id, juliacmd, env)
end

BenchmarkConfig(cfg::BenchmarkConfig) = cfg
BenchmarkConfig(str::String) = BenchmarkConfig(id = str)
BenchmarkConfig(::Void) = BenchmarkConfig()

const INDENT = " "

function Base.show(io::IO, bcfg::BenchmarkConfig)
println(io, "BenchmarkConfig:")
println(io, INDENT, "id: ", bcfg.id)
println(io, INDENT, "juliacmd: ", bcfg.juliacmd)
print(io, INDENT, "env: ")
if !isempty(bcfg.env)
first = true
for (k, v) in bcfg.env
if !first
println(io)
print(io, INDENT, " "^strwidth("env: "))
end
first = false
print(io, k, " => ", v)
end
end
end
110 changes: 97 additions & 13 deletions src/benchmarkresults.jl
@@ -5,15 +5,20 @@ The following (unexported) methods are defined on a `BenchmarkResults` (written
* `name(results)::String` - the name of the package benchmarked
* `commit(results)::String` - the commit of the package benchmarked. If the package repository was dirty, the string `"dirty"` is returned.
* `juliacommit(results)::String` - the commit of the Julia executable that ran the benchmarks
* `benchmarkgroup(results)::BenchmarkGroup` - a [`BenchmarkGroup`](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#the-benchmarkgroup-type)
containing the results of the benchmark.
* `date(results)::DateTime` - the time when the benchmarks were executed
`BenchmarkResults` can be exported to markdown using the function [`export_markdown`](@ref).
"""
immutable BenchmarkResults
struct BenchmarkResults
name::String
commit::String
benchmarkgroup::BenchmarkGroup
date::DateTime
julia_commit::String
vinfo::String
end

Base.:(==)(r1::BenchmarkResults, r2::BenchmarkResults) = r1.name == r2.name &&
@@ -23,21 +28,100 @@ Base.:(==)(r1::BenchmarkResults, r2::BenchmarkResults) = r1.name == r2

name(results::BenchmarkResults) = results.name
commit(results::BenchmarkResults) = results.commit
juliacommit(results::BenchmarkResults) = results.julia_commit
benchmarkgroup(results::BenchmarkResults) = results.benchmarkgroup
date(results::BenchmarkResults) = results.date
Base.versioninfo(results::BenchmarkResults) = results.vinfo

function Base.show(io::IO, results::BenchmarkResults)
if get(io, :limit, false)
print(io, "Benchmarkresults for ", results.name)
else
print_with_color(:nothing, "Benchmarkresults:\n"; bold = true)
println(io, " Package: ", results.name)
println(io, " Date: ", Base.Dates.format(results.date, "m u Y - H:M"))
println(io , " Package commit: ", results.commit)
iob = IOBuffer()
ioc = IOContext(iob)
show(ioc, MIME("text/plain"), results.benchmarkgroup)
println(io, " BenchmarkGroup:")
print(join(" " .* split(String(take!(iob)), "\n"), "\n"))
print(io, "Benchmarkresults:\n")
println(io, " Package: ", results.name)
println(io, " Date: ", Base.Dates.format(results.date, "m u Y - H:M"))
println(io, " Package commit: ", results.commit[1:min(length(results.commit), 6)])
println(io, " Julia commit: ", results.julia_commit[1:6])
iob = IOBuffer()
ioc = IOContext(iob)
show(ioc, MIME("text/plain"), results.benchmarkgroup)
println(io, " BenchmarkGroup:")
print(io, join(" " .* split(String(take!(iob)), "\n"), "\n"))
end


"""
export_markdown(file::String, results::BenchmarkResults)
export_markdown(io::IO, results::BenchmarkResults)
Writes the `results` to `file` or `io` in markdown format.
See also: [`BenchmarkResults`](@ref)
"""
function export_markdown(file::String, results::BenchmarkResults)
open(file, "w") do f
export_markdown(f, results)
end
end

function export_markdown(io::IO, results::BenchmarkResults)
println(io, """
# Benchmark Report for *$(name(results))*
## Job Properties
* Time of benchmark: $(Base.Dates.format(date(results), "m u Y - H:M"))
* Package commit: $(commit(results)[1:min(6, length(commit(results)))])
* Julia commit: $(juliacommit(results)[1:min(6, length(juliacommit(results)))])
""")

println(io, """
## Results
Below is a table of this job's results, obtained by running the benchmarks.
The values listed in the `ID` column have the structure `[parent_group, child_group, ..., key]`, and can be used to
index into the BaseBenchmarks suite to retrieve the corresponding benchmarks.
The percentages accompanying time and memory values in the below table are noise tolerances. The "true"
time/memory value for a given benchmark is expected to fall within this percentage of the reported value.
An empty cell means that the value was zero.
""")

print(io, """
| ID | time | GC time | memory | allocations |
|----|------|---------|--------|-------------|
""")

entries = BenchmarkTools.leaves(benchmarkgroup(results))
entries = entries[sortperm(map(x -> string(first(x)), entries))]


for (ids, t) in entries
println(io, resultrow(ids, t))
end
println(io)
println(io, """
## Benchmark Group List
Here's a list of all the benchmark groups executed by this job:
""")

for id in unique(map(pair -> pair[1][1:end-1], entries))
println(io, "- `", idrepr(id), "`")
end

println(io)
println(io, "## Versioninfo")
print(io, "```\n", versioninfo(results), "```")

return nothing
end

# Keep only the bracketed part of `repr(id)`, dropping any element-type prefix.
idrepr(id) = (str = repr(id); str[searchindex(str, '['):end])
# Format a tolerance as an integer percentage, e.g. 0.05 -> "5%".
intpercent(p) = string(ceil(Int, p * 100), "%")
# A raw `Trial` is reduced to its minimum estimate before being formatted as a row.
resultrow(ids, t::BenchmarkTools.Trial) = resultrow(ids, minimum(t))

function resultrow(ids, t::BenchmarkTools.TrialEstimate)
t_tol = intpercent(BenchmarkTools.params(t).time_tolerance)
m_tol = intpercent(BenchmarkTools.params(t).memory_tolerance)
timestr = BenchmarkTools.time(t) == 0 ? "-" : string(BenchmarkTools.prettytime(BenchmarkTools.time(t)), " (", t_tol, ")")
memstr = BenchmarkTools.memory(t) == 0 ? "-" : string(BenchmarkTools.prettymemory(BenchmarkTools.memory(t)), " (", m_tol, ")")
gcstr = BenchmarkTools.gctime(t) == 0 ? "-" : BenchmarkTools.prettytime(BenchmarkTools.gctime(t))
allocstr = BenchmarkTools.allocs(t) == 0 ? "-" : string(BenchmarkTools.allocs(t))
return "| `$(idrepr(ids))` | $(timestr) | $(gcstr) | $(memstr) | $(allocstr) |"
end

resultmark(sym::Symbol) = sym == :regression ? REGRESS_MARK : (sym == :improvement ? IMPROVE_MARK : "")
