
Implement feedback and add additional test
This adds a test to make sure callbacks are run during dry runs, and
also makes sure the dry run feature is working properly.
devonestes committed Jan 31, 2018
1 parent aefe802 commit a0c676c
Showing 5 changed files with 47 additions and 15 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -5,7 +5,7 @@ everything will run without error before running the full set of benchmarks.

### Features (User Facing)
* new `dry_run` configuration option which allows users to add a dry run of all
benchmarks with each input before running the actual suite. This should save
benchmarks with each input before running the actual suite. This should save
time while actually writing the code for your benchmarks.

## 0.12.0 (2018-01-20)
Expand Down
2 changes: 1 addition & 1 deletion README.md
@@ -143,7 +143,7 @@ The available options are the following (also documented in [hexdocs](https://he

* `warmup` - the time in seconds for which a benchmarking job should be run without measuring times before real measurements start. This simulates a _"warm"_ running system. Defaults to 2.
* `time` - the time in seconds for how long each individual benchmarking job should be run and measured. Defaults to 5.
* `dry_run` - whether or not to run each job with each input to ensure that your code executes without error. This can save time while developing your suites.
* `dry_run` - whether or not to run each job with each input - including all given before/after scenario and before/after each hooks - before the benchmarks are measured, to ensure that your code executes without error. This can save time while developing your suites. Defaults to `false`.
* `inputs` - a map from descriptive input names to some different input, your benchmarking jobs will then be run with each of these inputs. For this to work your benchmarking function gets the current input passed in as an argument into the function. Defaults to `nil`, aka no input specified and functions are called without an argument. See [Inputs](#inputs).
* `parallel` - the function of each benchmarking job will be executed in `parallel` number processes. If `parallel: 4` then 4 processes will be spawned that all execute the _same_ function for the given time. When these finish/the time is up 4 new processes will be spawned for the next job/function. This gives you more data in the same time, but also puts a load on the system interfering with benchmark results. For more on the pros and cons of parallel benchmarking [check the wiki](https://github.com/PragTob/benchee/wiki/Parallel-Benchmarking). Defaults to 1 (no parallel execution).
* `formatters` - list of formatters either as module implementing the formatter behaviour or formatter functions. They are run when using `Benchee.run/2`. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins. Defaults to the builtin console formatter `Benchee.Formatters.Console`. See [Formatters](#formatters).
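To make the documented options concrete, here is a minimal usage sketch. The job names, functions, and input values are invented for illustration; only the option keys (`warmup`, `time`, `inputs`, `dry_run`) come from the list above.

```elixir
# Illustrative benchmark script (not part of this commit). With dry_run: true,
# each job is executed once per input - hooks included - before the measured
# warmup and time phases begin, so configuration or code errors surface early.
Benchee.run(
  %{
    "flat_map" => fn input -> Enum.flat_map(input, &[&1, &1]) end,
    "map_flatten" => fn input -> input |> Enum.map(&[&1, &1]) |> List.flatten() end
  },
  inputs: %{"small" => Enum.to_list(1..100), "big" => Enum.to_list(1..10_000)},
  warmup: 2,
  time: 5,
  dry_run: true
)
```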
6 changes: 2 additions & 4 deletions lib/benchee/benchmark/runner.ex
@@ -25,9 +25,8 @@ defmodule Benchee.Benchmark.Runner do
"""
@spec run_scenarios([Scenario.t()], ScenarioContext.t()) :: [Scenario.t()]
def run_scenarios(scenarios, scenario_context) do
Enum.map(scenarios, fn scenario ->
parallel_benchmark(scenario, scenario_context)
end)
_ = Enum.map(scenarios, fn scenario -> dry_run(scenario, scenario_context) end)
Enum.map(scenarios, fn scenario -> parallel_benchmark(scenario, scenario_context) end)
end

defp parallel_benchmark(
@@ -38,7 +37,6 @@ defmodule Benchee.Benchmark.Runner do
}
) do
printer.benchmarking(job_name, input_name, config)
dry_run(scenario, scenario_context)

measurements =
1..config.parallel
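The `dry_run/2` helper the new code calls is not part of this hunk, so the following is only a plausible sketch of its shape, assuming it no-ops unless `config.dry_run` is set and otherwise runs the scenario a single time (hooks included) without keeping measurements; `run_scenario_once/2` is a hypothetical name, not benchee's actual internal function.

```elixir
# Plausible sketch only - the real implementation lives elsewhere in runner.ex.
defp dry_run(scenario, scenario_context = %ScenarioContext{config: %{dry_run: true}}) do
  # Run the job once with its hooks; the caller discards the result (`_ =`),
  # so only side effects and potential errors matter here.
  run_scenario_once(scenario, scenario_context)
end

defp dry_run(_scenario, _scenario_context), do: :ok
```

Hoisting the call out of `parallel_benchmark/2` means every scenario is dry-run up front, before any benchmarking output or measurement starts, which is what the new tests below exercise.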
5 changes: 5 additions & 0 deletions lib/benchee/configuration.ex
@@ -45,6 +45,7 @@ defmodule Benchee.Configuration do
parallel: integer,
time: number,
warmup: number,
dry_run: boolean,
formatters: [(Suite.t() -> Suite.t())],
print: map,
inputs: %{Suite.key() => any} | nil,
@@ -75,6 +76,10 @@ defmodule Benchee.Configuration do
how often it is executed). Defaults to 5.
* `warmup` - the time in seconds for which the benchmarking function
should be run without gathering results. Defaults to 2.
* `dry_run` - whether or not to run each job with each input - including all
given before/after scenario and before/after each hooks - before the benchmarks
are measured, to ensure that your code executes without error. This can save
time while developing your suites. Defaults to `false`.
* `inputs` - a map from descriptive input names to some different input,
your benchmarking jobs will then be run with each of these inputs. For this
to work your benchmarking function gets the current input passed in as an
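As a quick sanity check of the documented default, a hedged sketch of inspecting the option through benchee's pipeline API. This assumes `Benchee.init/1` accepts a configuration map and that the struct default for `dry_run` is `false`, as the docs above state; neither is shown in this diff.

```elixir
# Hedged sketch: verifying the option's documented default and override behaviour.
default = Benchee.init(%{})
default.configuration.dry_run
#=> false (the documented default)

overridden = Benchee.init(%{dry_run: true})
overridden.configuration.dry_run
#=> true
```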
47 changes: 38 additions & 9 deletions test/benchee/benchmark/runner_test.exs
@@ -357,12 +357,7 @@ defmodule Benchee.Benchmark.RunnerTest do
)
|> Benchmark.measure(TestPrinter)

assert_received_exactly([
:before_scenario,
:before,
:after,
:after_scenario
])
assert_received_exactly([:before_scenario, :before, :after, :after_scenario])
end

test "hooks trigger during warmup and runtime but scenarios once" do
@@ -839,19 +834,53 @@ defmodule Benchee.Benchmark.RunnerTest do
end

test "runs all benchmarks with all inputs exactly once as a dry run" do
ref = self()
me = self()

inputs = %{"small" => 1, "big" => 100}

config = %{time: 0, warmup: 0, inputs: inputs, dry_run: true}

%Suite{configuration: config}
|> test_suite
|> Benchmark.benchmark("first", fn input -> send(ref, {:first, input}) end)
|> Benchmark.benchmark("second", fn input -> send(ref, {:second, input}) end)
|> Benchmark.benchmark("first", fn input -> send(me, {:first, input}) end)
|> Benchmark.benchmark("second", fn input -> send(me, {:second, input}) end)
|> Benchmark.measure(TestPrinter)

assert_received_exactly([{:first, 100}, {:first, 1}, {:second, 100}, {:second, 1}])
end

test "runs all hooks as part of a dry run" do
me = self()

config = %{time: 100, warmup: 100, dry_run: true}

try do
%Suite{configuration: config}
|> test_suite
|> Benchmark.benchmark("first", fn -> send(me, :first) end)
|> Benchmark.benchmark(
"job",
{fn -> send(me, :second) end,
before_each: fn input ->
send(me, :before)
input
end,
after_each: fn _ -> send(me, :after) end,
before_scenario: fn input ->
send(me, :before_scenario)
input
end,
after_scenario: fn _ ->
send(me, :after_scenario)
raise "This fails!"
end}
)
|> Benchmark.measure(TestPrinter)
rescue
RuntimeError -> nil
end

assert_received_exactly([:first, :before_scenario, :before, :second, :after, :after_scenario])
end
end
end
