Commit

Rename dry_run to pre_check
devonestes committed Feb 2, 2018
1 parent 7a8ef14 commit 3c7118f
Showing 5 changed files with 14 additions and 14 deletions.
4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -1,10 +1,10 @@
 ## 0.13.0 (2018-??-??)
 
-Adds the ability to run a `dry_run` of your benchmarks if you want to make sure
+Adds the ability to run a `pre_check` of your benchmarks if you want to make sure
 everything will run without error before running the full set of benchmarks.
 
 ### Features (User Facing)
-* new `dry_run` configuration option which allows users to add a dry run of all
+* new `pre_check` configuration option which allows users to add a dry run of all
   benchmarks with each input before running the actual suite. This should save
   time while actually writing the code for your benchmarks.

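For readers skimming the changelog, a minimal usage sketch of the renamed option (the job names, functions, and inputs below are illustrative, not part of this commit):

```elixir
# With pre_check: true, every job is executed once with every input before
# warmup and measurement begin, so a broken benchmark fails immediately.
Benchee.run(
  %{
    "map" => fn list -> Enum.map(list, &(&1 * 2)) end,
    "flat_map" => fn list -> Enum.flat_map(list, &[&1 * 2]) end
  },
  inputs: %{"small" => Enum.to_list(1..100), "large" => Enum.to_list(1..100_000)},
  pre_check: true
)
```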
2 changes: 1 addition & 1 deletion README.md
@@ -143,7 +143,7 @@ The available options are the following (also documented in [hexdocs](https://he
 
 * `warmup` - the time in seconds for which a benchmarking job should be run without measuring times before real measurements start. This simulates a _"warm"_ running system. Defaults to 2.
 * `time` - the time in seconds for how long each individual benchmarking job should be run and measured. Defaults to 5.
-* `dry_run` - whether or not to run each job with each input - including all given before or after scenario or each hooks - before the benchmarks are measured to ensure that your code executes without error. This can save time while developing your suites. Defaults to `false`.
+* `pre_check` - whether or not to run each job with each input - including all given before or after scenario or each hooks - before the benchmarks are measured to ensure that your code executes without error. This can save time while developing your suites. Defaults to `false`.
 * `inputs` - a map from descriptive input names to some different input, your benchmarking jobs will then be run with each of these inputs. For this to work your benchmarking function gets the current input passed in as an argument into the function. Defaults to `nil`, aka no input specified and functions are called without an argument. See [Inputs](#inputs).
 * `parallel` - the function of each benchmarking job will be executed in `parallel` number processes. If `parallel: 4` then 4 processes will be spawned that all execute the _same_ function for the given time. When these finish/the time is up 4 new processes will be spawned for the next job/function. This gives you more data in the same time, but also puts a load on the system interfering with benchmark results. For more on the pros and cons of parallel benchmarking [check the wiki](https://github.com/PragTob/benchee/wiki/Parallel-Benchmarking). Defaults to 1 (no parallel execution).
 * `formatters` - list of formatters either as module implementing the formatter behaviour or formatter functions. They are run when using `Benchee.run/2`. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins. Defaults to the builtin console formatter `Benchee.Formatters.Console`. See [Formatters](#formatters).
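As a rough sketch of how several of these documented options combine in one `Benchee.run/2` call (all values here are chosen purely for illustration):

```elixir
Benchee.run(
  %{"reverse" => fn input -> Enum.reverse(input) end},
  warmup: 2,        # seconds of unmeasured execution to warm the system
  time: 5,          # seconds of measured execution per job
  parallel: 4,      # four processes execute the same job concurrently
  pre_check: true,  # run each job/input pair once before measuring
  inputs: %{"10k items" => Enum.to_list(1..10_000)},
  formatters: [Benchee.Formatters.Console]
)
```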
6 changes: 3 additions & 3 deletions lib/benchee/benchmark/runner.ex
@@ -25,7 +25,7 @@ defmodule Benchee.Benchmark.Runner do
   """
   @spec run_scenarios([Scenario.t()], ScenarioContext.t()) :: [Scenario.t()]
   def run_scenarios(scenarios, scenario_context) do
-    Enum.each(scenarios, fn scenario -> dry_run(scenario, scenario_context) end)
+    Enum.each(scenarios, fn scenario -> pre_check(scenario, scenario_context) end)
     Enum.map(scenarios, fn scenario -> parallel_benchmark(scenario, scenario_context) end)
   end

@@ -48,15 +48,15 @@
 
   # This will run the given scenario exactly once, including the before and
   # after hooks, to ensure the function can execute without raising an error.
-  defp dry_run(scenario, scenario_context = %ScenarioContext{config: %{dry_run: true}}) do
+  defp pre_check(scenario, scenario_context = %ScenarioContext{config: %{pre_check: true}}) do
     scenario_input = run_before_scenario(scenario, scenario_context)
     scenario_context = %ScenarioContext{scenario_context | scenario_input: scenario_input}
     measure_iteration(scenario, scenario_context)
     run_after_scenario(scenario, scenario_context)
     nil
   end
 
-  defp dry_run(_, _), do: nil
+  defp pre_check(_, _), do: nil
 
   defp measure_scenario(scenario, scenario_context) do
     scenario_input = run_before_scenario(scenario, scenario_context)
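The catch-all `pre_check(_, _)` clause is what keeps the feature opt-in: the first clause only matches when the config map carries `pre_check: true`. A self-contained sketch of the same multi-clause dispatch (module and function names are illustrative, not from this commit):

```elixir
defmodule PreCheckSketch do
  # Matches only a config containing pre_check: true and runs the job
  # exactly once; every other config falls through to the no-op clause.
  def maybe_pre_check(job, %{pre_check: true}) do
    job.()
    nil
  end

  def maybe_pre_check(_job, _config), do: nil
end

PreCheckSketch.maybe_pre_check(fn -> IO.puts("runs once") end, %{pre_check: true})
# prints "runs once"
PreCheckSketch.maybe_pre_check(fn -> IO.puts("never runs") end, %{pre_check: false})
# returns nil without calling the job
```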
6 changes: 3 additions & 3 deletions lib/benchee/configuration.ex
@@ -15,7 +15,7 @@ defmodule Benchee.Configuration do
   defstruct parallel: 1,
             time: 5,
             warmup: 2,
-            dry_run: false,
+            pre_check: false,
             formatters: [Console],
             print: %{
               benchmarking: true,
@@ -45,7 +45,7 @@
           parallel: integer,
           time: number,
           warmup: number,
-          dry_run: boolean,
+          pre_check: boolean,
           formatters: [(Suite.t() -> Suite.t())],
           print: map,
           inputs: %{Suite.key() => any} | nil,
@@ -76,7 +76,7 @@
     how often it is executed). Defaults to 5.
   * `warmup` - the time in seconds for which the benchmarking function
     should be run without gathering results. Defaults to 2.
-  * `dry_run` - whether or not to run each job with each input - including all
+  * `pre_check` - whether or not to run each job with each input - including all
     given before or after scenario or each hooks - before the benchmarks are
     measured to ensure that your code executes without error. This can save time
     while developing your suites. Defaults to `false`.
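Because the default lives in the struct definition, existing configurations silently keep the old behavior; an iex-style illustration, assuming the struct as defined above:

```elixir
iex> config = %Benchee.Configuration{}
iex> config.pre_check
false
iex> config = %Benchee.Configuration{config | pre_check: true}
iex> config.pre_check
true
```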
10 changes: 5 additions & 5 deletions test/benchee/benchmark/runner_test.exs
@@ -10,7 +10,7 @@ defmodule Benchee.Benchmark.RunnerTest do
     time: 40_000,
     warmup: 20_000,
     inputs: nil,
-    dry_run: false,
+    pre_check: false,
     print: %{fast_warning: false, configuration: true}
   }
   @system %{
@@ -833,12 +833,12 @@
       ])
     end
 
-    test "runs all benchmarks with all inputs exactly once as a dry run" do
+    test "runs all benchmarks with all inputs exactly once as a pre check" do
       me = self()
 
       inputs = %{"small" => 1, "big" => 100}
 
-      config = %{time: 0, warmup: 0, inputs: inputs, dry_run: true}
+      config = %{time: 0, warmup: 0, inputs: inputs, pre_check: true}
 
       %Suite{configuration: config}
       |> test_suite
@@ -849,10 +849,10 @@
       assert_received_exactly([{:first, 100}, {:first, 1}, {:second, 100}, {:second, 1}])
     end
 
-    test "runs all hooks as part of a dry run" do
+    test "runs all hooks as part of a pre check" do
       me = self()
 
-      config = %{time: 100, warmup: 100, dry_run: true}
+      config = %{time: 100, warmup: 100, pre_check: true}
 
       try do
         %Suite{configuration: config}
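These tests lean on the suite's internal `test_suite/1` and `assert_received_exactly/1` helpers. As an untested, helper-free sketch of the same observation technique (each job messages the test process, so the single pre-check execution per input is directly countable; the `formatters: []` option and the exact behavior of `Benchee.run/2` at `time: 0` are assumptions here):

```elixir
defmodule PreCheckObservationSketch do
  use ExUnit.Case

  test "each job runs exactly once per input during pre check" do
    me = self()

    Benchee.run(
      %{"first" => fn input -> send(me, {:first, input}) end},
      time: 0,          # nothing is measured ...
      warmup: 0,        # ... and nothing is warmed up,
      pre_check: true,  # so any message must come from the pre check
      inputs: %{"small" => 1, "big" => 100},
      formatters: []
    )

    assert_received {:first, 1}
    assert_received {:first, 100}
    refute_received {:first, _}
  end
end
```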
