Formatter option tuples introduction #242

Merged
merged 9 commits on Aug 10, 2018
10 changes: 8 additions & 2 deletions CHANGELOG.md
@@ -2,17 +2,23 @@

### Breaking Changes (User Facing)
* dropped support for Erlang 18.x
* Formatters no longer have an `output/1` function; please use `Formatter.output/3` instead
* Usage of `formatter_options` is deprecated; please use the new `{module, options}` tuples instead

### Features (User Facing)
* benchee now uses the maximum precision available for measurements, which on Linux and OSX is nanoseconds instead of microseconds. Somewhat surprisingly, `:timer.tc/1` always cuts results down to microseconds although better precision is available.
* The preferred way to specify formatters and their options is now the tuple form `{module, options}` instead of `formatter_options` (see the example after this list).
* New `Formatter.output/1` function that takes a suite and uses all configured formatters to output their results
* Add the concept of a benchmarking title that formatters can pick up
* the displayed percentiles can now be adjusted
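
For illustration, here is a minimal sketch of the tuple form (the benchmarking job and options below are only examples):

```elixir
Benchee.run(
  %{"flat_map" => fn -> Enum.flat_map(1..1_000, fn i -> [i, i * i] end) end},
  formatters: [{Benchee.Formatters.Console, extended_statistics: true}]
)
```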

### Breaking Changes (Plugins)
* all reported times are now in nanoseconds instead of microseconds
* formatter functions `format` and `write` now take 2 arguments each, where the additional argument is the options specified for this formatter, so that you have direct access to them without peeling them from the suite
* You can no longer `use Benchee.Formatter` - just adopt the behaviour (there is no more auto-generated `output/1` function; `Formatter.output/3` takes over that responsibility)

### Features (User Facing)
* An optional title is now available in the suite
### Features (Plugins)
* An optional title is now available in the suite for you to display

## 0.13.2 (2018-08-02)

16 changes: 9 additions & 7 deletions README.md
@@ -163,7 +163,7 @@ The available options are the following (also documented in [hexdocs](https://he
* `time` - the time in seconds for how long each individual benchmarking job should be run for measuring the execution times (run time performance). Defaults to 5.
* `memory_time` - the time in seconds for how long [memory measurements](#measuring-memory-consumption) should be conducted. Defaults to 0 (turned off).
* `inputs` - a map from descriptive input names to some different input; your benchmarking jobs will then be run with each of these inputs. For this to work, your benchmarking function gets the current input passed in as an argument. Defaults to `nil`, aka no input specified and functions are called without an argument. See [Inputs](#inputs).
* `formatters` - list of formatters either as module implementing the formatter behaviour or formatter functions. They are run when using `Benchee.run/2`. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins. Defaults to the builtin console formatter `Benchee.Formatters.Console`. See [Formatters](#formatters).
* `formatters` - list of formatters either as a module implementing the formatter behaviour, a tuple of said module and options it should take or formatter functions. They are run when using `Benchee.run/2` or you can invoke them through `Benchee.Formatter.output/1`. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins. Defaults to the builtin console formatter `Benchee.Formatters.Console`. See [Formatters](#formatters).
* `pre_check` - whether or not to run each job with each input - including all given before or after scenario or each hooks - before the benchmarks are measured to ensure that your code executes without error. This can save time while developing your suites. Defaults to `false`.
* `parallel` - the function of each benchmarking job will be executed in `parallel` number processes. If `parallel: 4` then 4 processes will be spawned that all execute the _same_ function for the given time. When these finish/the time is up 4 new processes will be spawned for the next job/function. This gives you more data in the same time, but also puts a load on the system interfering with benchmark results. For more on the pros and cons of parallel benchmarking [check the wiki](https://github.com/PragTob/benchee/wiki/Parallel-Benchmarking). Defaults to 1 (no parallel execution).
* `save` - specify a `path` where to store the results of the current benchmarking suite, tagged with the specified `tag`. See [Saving & Loading](#saving-loading-and-comparing-previous-runs).
@@ -251,7 +251,8 @@ Therefore, I **highly recommend** using this feature and checking different real
Among all the configuration options, the ones you probably want to use most are the formatters. They format and print out the results of the benchmarking suite.

The `:formatters` option is specified as a list of:
* modules implementing the `Benchee.Formatter` behaviour, or...
* modules implementing the `Benchee.Formatter` behaviour
* a tuple of such a module and the options it should take: `{module, options}`
* functions that take one argument (the benchmarking suite with all its results) and then do whatever you want them to (see the sketch right after this list)
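
For illustration, a minimal sketch of the last variant, a formatter given as a plain one-argument function (`IO.inspect/1` merely stands in for whatever output you want to produce):

```elixir
Benchee.run(
  %{"flat_map" => fn -> Enum.flat_map(1..1_000, fn i -> [i, i * i] end) end},
  formatters: [&IO.inspect/1]
)
```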

So if you are using the [HTML plugin](https://github.com/PragTob/benchee_html) and you want to run both the console formatter and the HTML formatter, it looks like this (after you have installed it, of course):
@@ -265,10 +266,9 @@ Benchee.run(%{
"map.flatten" => fn -> list |> Enum.map(map_fun) |> List.flatten end
},
formatters: [
Benchee.Formatters.HTML,
{Benchee.Formatters.HTML, file: "samples_output/my.html"},
Benchee.Formatters.Console
],
formatter_options: [html: [file: "samples_output/my.html"]]
]
)
```

Expand All @@ -283,7 +283,7 @@ map_fun = fn(i) -> [i, i * i] end
Benchee.run(%{
"flat_map" => fn -> Enum.flat_map(list, map_fun) end,
"map.flatten" => fn -> list |> Enum.map(map_fun) |> List.flatten end
}, time: 10, formatter_options: %{console: %{extended_statistics: true}})
}, time: 10, formatters: [{Benchee.Formatters.Console, extended_statistics: true}])
```

Which produces:
@@ -640,7 +640,9 @@ Benchee.init(time: 3)
|> Benchee.measure
|> Benchee.statistics
|> Benchee.load # can be omitted when you don't want to/need to load scenarios
|> Benchee.Formatters.Console.output
|> Benchee.Formatter.output(Benchee.Formatters.Console, %{})
# Instead of the last call you could also just use Benchee.Formatter.output()
# to run all configured formatters
```
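
For comparison, here is a minimal sketch of the variant mentioned in the comment above: configure the formatters up front and let `Benchee.Formatter.output/1` run all of them (the job and options are only examples):

```elixir
Benchee.init(
  time: 3,
  formatters: [{Benchee.Formatters.Console, extended_statistics: true}]
)
|> Benchee.system
|> Benchee.benchmark("flat_map", fn -> Enum.flat_map(1..1_000, fn i -> [i, i * i] end) end)
|> Benchee.measure
|> Benchee.statistics
|> Benchee.Formatter.output
```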

This is a take on the _functional transformation_ of data applied to benchmarks:
30 changes: 1 addition & 29 deletions lib/benchee.ex
@@ -49,44 +49,16 @@ for {module, moduledoc} <- [{Benchee, elixir_doc}, {:benchee, erlang_doc}] do
end

defp do_run(jobs, config) do
jobs
|> run_benchmarks(config)
|> output_results
end

defp run_benchmarks(jobs, config) do
config
|> Benchee.init()
|> Benchee.system()
|> add_benchmarking_jobs(jobs)
|> Benchee.measure()
|> Benchee.statistics()
|> Benchee.load()
|> Formatter.output()
end

defp output_results(suite = %{configuration: %{formatters: formatters}}) do
{parallelizable, serial} = Enum.split_with(formatters, &is_formatter_module?/1)

# why do we ignore this suite? It shouldn't be changed anyway.
_suite = Formatter.parallel_output(suite, parallelizable)

Enum.each(serial, fn output_function ->
output_function.(suite)
end)

suite
end

defp is_formatter_module?(formatter) when is_atom(formatter) do
module_attributes = formatter.module_info(:attributes)

module_attributes
|> Keyword.get(:behaviour, [])
|> Enum.member?(Benchee.Formatter)
end

defp is_formatter_module?(_), do: false

defp add_benchmarking_jobs(suite, jobs) do
Enum.reduce(jobs, suite, fn {key, function}, suite_acc ->
Benchee.benchmark(suite_acc, key, function)
73 changes: 42 additions & 31 deletions lib/benchee/configuration.ex
@@ -95,12 +95,13 @@ defmodule Benchee.Configuration do
* `title` - this option is purely cosmetic. If you would like to add a
title with some meaning to a given suite, you can do so by providing
a single string here. This is only for use by formatters.
* `formatters` - list of formatters either as module implementing the
formatter behaviour or formatter functions. They are run when using
`Benchee.run/2`. Functions need to accept one argument (which is the
benchmarking suite with all data) and then use that to produce output. Used
* `formatters` - list of formatters either as a module implementing the formatter
behaviour, a tuple of said module and options it should take or formatter
functions. They are run when using `Benchee.run/2` or you can invoke them
through `Benchee.Formatter.output/1`. Functions need to accept one argument (which
is the benchmarking suite with all data) and then use that to produce output. Used
for plugins. Defaults to the builtin console formatter
`Benchee.Formatters.Console`.
`Benchee.Formatters.Console`. See [Formatters](#formatters).
* `pre_check` - whether or not to run each job with each input - including all
given before or after scenario or each hooks - before the benchmarks are
measured to ensure that your code executes without error. This can save time
@@ -167,18 +168,17 @@ defmodule Benchee.Configuration do
inputs: nil,
save: false,
load: false,
formatters: [Benchee.Formatters.Console],
formatters: [
{
Benchee.Formatters.Console,
%{comparison: true, extended_statistics: false}
}
],
print: %{
benchmarking: true,
fast_warning: true,
configuration: true
},
formatter_options: %{
console: %{
comparison: true,
extended_statistics: false
}
},
percentiles: [50, 99],
unit_scaling: :best,
assigns: %{},
@@ -201,18 +201,17 @@
inputs: nil,
save: false,
load: false,
formatters: [Benchee.Formatters.Console],
formatters: [
{
Benchee.Formatters.Console,
%{comparison: true, extended_statistics: false}
}
],
print: %{
benchmarking: true,
fast_warning: true,
configuration: true
},
formatter_options: %{
console: %{
comparison: true,
extended_statistics: false
}
},
percentiles: [50, 99],
unit_scaling: :best,
assigns: %{},
@@ -235,18 +234,17 @@
inputs: nil,
save: false,
load: false,
formatters: [Benchee.Formatters.Console],
formatters: [
{
Benchee.Formatters.Console,
%{comparison: true, extended_statistics: false}
}
],
print: %{
benchmarking: true,
fast_warning: true,
configuration: true
},
formatter_options: %{
console: %{
comparison: true,
extended_statistics: false
}
},
percentiles: [50, 99],
unit_scaling: :best,
assigns: %{},
@@ -263,7 +261,7 @@
...> parallel: 2,
...> time: 1,
...> warmup: 0.2,
...> formatters: [&IO.puts/2],
...> formatters: [&IO.puts/1],
...> print: [fast_warning: false],
...> console: [comparison: false],
...> inputs: %{"Small" => 5, "Big" => 9999},
@@ -278,7 +276,7 @@
inputs: %{"Small" => 5, "Big" => 9999},
save: false,
load: false,
formatters: [&IO.puts/2],
formatters: [&IO.puts/1],
print: %{
benchmarking: true,
fast_warning: false,
@@ -311,6 +309,7 @@
config
|> standardized_user_configuration
|> merge_with_defaults
|> formatter_options_to_tuples
|> convert_time_to_nano_s
|> update_measure_memory
|> save_option_conversion
@@ -320,7 +319,7 @@

defp standardized_user_configuration(config) do
config
|> DeepConvert.to_map()
|> DeepConvert.to_map([:formatters])
|> translate_formatter_keys
|> force_string_input_keys
end
@@ -333,6 +332,19 @@
DeepMerge.deep_merge(%{formatter_options: formatter_options}, config)
end

alias Benchee.Formatters.{Console, CSV, JSON, HTML}

# backwards compatible handling of the old formatter_options so the burden isn't put on every formatter
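# For example, a (hypothetical) legacy user configuration such as
#   formatters: [Console], formatter_options: %{console: %{extended_statistics: true}}
# is rewritten here to
#   formatters: [{Console, %{extended_statistics: true}}]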
defp formatter_options_to_tuples(config) do
update_in(config, [Access.key(:formatters), Access.all()], fn
Console -> {Console, config.formatter_options[:console]}
CSV -> {CSV, config.formatter_options[:csv]}
JSON -> {JSON, config.formatter_options[:json]}
HTML -> {HTML, config.formatter_options[:html]}
formatter -> formatter
end)
end

defp force_string_input_keys(config = %{inputs: inputs}) do
standardized_inputs =
for {name, value} <- inputs, into: %{} do
@@ -394,8 +406,7 @@

%__MODULE__{
config
| formatters: config.formatters ++ [Benchee.Formatters.TaggedSave],
formatter_options: Map.put(config.formatter_options, :tagged_save, tagged_save_options)
| formatters: config.formatters ++ [{Benchee.Formatters.TaggedSave, tagged_save_options}]
}
end
