Merge 4708dad into ee9f723
devonestes committed Nov 23, 2018
2 parents ee9f723 + 4708dad commit 6ce5197
Showing 27 changed files with 250 additions and 287 deletions.
15 changes: 1 addition & 14 deletions .travis.yml
@@ -1,29 +1,16 @@
language: elixir
elixir:
- 1.4.5
- 1.5.3
- 1.6.6
- 1.7.4
otp_release:
- 19.3
- 20.3
- 21.1

matrix:
exclude:
- elixir: 1.4.5
otp_release: 21.1
- elixir: 1.5.3
otp_release: 21.1
- elixir: 1.7.4
otp_release: 19.3

before_script:
- MIX_ENV=test mix compile --warnings-as-errors
- travis_wait mix dialyzer --plt
script:
- mix credo --strict
- if [[ "$TRAVIS_ELIXIR_VERSION" == "1.6"* ]]; then mix format --check-formatted; fi
- mix format --check-formatted
- mix dialyzer --halt-exit-status
- mix safe_coveralls.travis
after_script:
18 changes: 13 additions & 5 deletions README.md
@@ -94,7 +94,7 @@ Add benchee to your list of dependencies in `mix.exs`:

```elixir
defp deps do
[{:benchee, "~> 0.11", only: :dev}]
[{:benchee, "~> 0.13", only: :dev}]
end
```

@@ -162,7 +162,7 @@ The available options are the following (also documented in [hexdocs](https://he
* `warmup` - the time in seconds for which a benchmarking job should be run without measuring times before "real" measurements start. This simulates a _"warm"_ running system. Defaults to 2.
* `time` - the time in seconds for how long each individual benchmarking job should be run for measuring the execution times (run time performance). Defaults to 5.
* `memory_time` - the time in seconds for how long [memory measurements](#measuring-memory-consumption) should be conducted. Defaults to 0 (turned off).
* `inputs` - a map from descriptive input names to some different input, your benchmarking jobs will then be run with each of these inputs. For this to work your benchmarking function gets the current input passed in as an argument into the function. Defaults to `nil`, aka no input specified and functions are called without an argument. See [Inputs](#inputs).
* `inputs` - a map or a list of two-element tuples. If a map, the keys are descriptive input names and the values are the actual input values. If a list of tuples, the first element in each tuple is the input name and the second element is the actual input value. Your benchmarking jobs will then be run with each of these inputs. For this to work your benchmarking function gets the current input passed in as an argument. Defaults to `nil`, aka no input specified and functions are called without an argument. See [Inputs](#inputs).
* `formatters` - list of formatters, either as a module implementing the formatter behaviour, a tuple of said module and the options it should take, or a formatter function. They are run when using `Benchee.run/2`, or you can invoke them through `Benchee.Formatter.output/1`. Functions need to accept one argument (the benchmarking suite with all data) and use that to produce output. Used for plugins. Defaults to the built-in console formatter `Benchee.Formatters.Console`. See [Formatters](#formatters).
* `pre_check` - whether or not to run each job with each input - including all given before or after scenario or each hooks - before the benchmarks are measured to ensure that your code executes without error. This can save time while developing your suites. Defaults to `false`.
* `parallel` - the function of each benchmarking job will be executed in `parallel` number of processes. If `parallel: 4` then 4 processes will be spawned that all execute the _same_ function for the given time. When these finish/the time is up, 4 new processes will be spawned for the next job/function. This gives you more data in the same time, but also puts a load on the system interfering with benchmark results. For more on the pros and cons of parallel benchmarking [check the wiki](https://github.com/PragTob/benchee/wiki/Parallel-Benchmarking). Defaults to 1 (no parallel execution).
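Taken together, the options above are passed as a keyword list to `Benchee.run/2`. The following is a hedged configuration sketch reflecting the shapes introduced in this changeset (inputs as a list of two-element tuples, formatters as `{module, options}` tuples); the job names, functions, and input sizes are illustrative, not part of this commit:

```elixir
# Illustrative sketch; requires the :benchee dependency in mix.exs.
# Job names and inputs are examples chosen for this sketch.
Benchee.run(
  %{
    "flat_map" => fn input -> Enum.flat_map(input, fn i -> [i, i * 2] end) end,
    "map.flatten" => fn input ->
      input |> Enum.map(fn i -> [i, i * 2] end) |> List.flatten()
    end
  },
  warmup: 2,
  time: 5,
  memory_time: 1,
  # Inputs may be a map or, as of this change, a list of {name, value} tuples.
  inputs: [
    {"Small", Enum.to_list(1..100)},
    {"Big", Enum.to_list(1..10_000)}
  ],
  # Formatters may be modules or {module, options} tuples.
  formatters: [{Benchee.Formatters.Console, %{comparison: true, extended_statistics: false}}]
)
```

A list of tuples is useful when the order in which inputs are reported matters, since maps do not preserve insertion order.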
@@ -218,18 +218,26 @@ A full example, including an example of the console output, can be found

### Inputs

`:inputs` is a very useful configuration that allows you to run the same benchmarking jobs with different inputs. You specify the inputs as a map from name (String or atom) to the actual input value. Functions can have different performance characteristics on differently shaped inputs - be that structure or input size.
`:inputs` is a very useful configuration option that allows you to run the same benchmarking jobs with different inputs. You specify the inputs either as a map from name (String or atom) to the actual input value, or as a list of tuples where the first element in each tuple is the name and the second element is the value. Functions can have different performance characteristics on differently shaped inputs - be that structure or input size.

One such case is comparing tail-recursive and body-recursive implementations of `map`. More information can be found in the [repository with the benchmark](https://github.com/PragTob/elixir_playground/blob/master/bench/tco_blog_post_focussed_inputs.exs) and the [blog post](https://pragtob.wordpress.com/2016/06/16/tail-call-optimization-in-elixir-erlang-not-as-efficient-and-important-as-you-probably-think/).

```elixir
map_fun = fn(i) -> i + 1 end
inputs = %{
"Small (1 Thousand)" => Enum.to_list(1..1_000),
"Small (1 Thousand)" => Enum.to_list(1..1_000),
"Middle (100 Thousand)" => Enum.to_list(1..100_000),
"Big (10 Million)" => Enum.to_list(1..10_000_000),
"Big (10 Million)" => Enum.to_list(1..10_000_000)
}

# Or inputs could also look like this:
#
# inputs = [
# {"Small (1 Thousand)", Enum.to_list(1..1_000)},
# {"Middle (100 Thousand)", Enum.to_list(1..100_000)},
# {"Big (10 Million)", Enum.to_list(1..10_000_000)}
# ]

Benchee.run %{
"map tail-recursive" =>
fn(list) -> MyMap.map_tco(list, map_fun) end,
44 changes: 18 additions & 26 deletions lib/benchee.ex
@@ -20,35 +20,27 @@ for {module, moduledoc} <- [{Benchee, elixir_doc}, {:benchee, erlang_doc}] do
alias Benchee.Formatter

@doc """
Run benchmark jobs defined by a map and optionally provide configuration
options.
Runs the given benchmarks, calculates statistics based on the results and
outputs results with the configured formatters.
Runs the given benchmarks and prints the results on the console.
* jobs - a map from descriptive benchmark job name to a function to be
executed and benchmarked
* configuration - configuration options to alter what Benchee does, see
`Benchee.Configuration.init/1` for documentation of the available options.
Benchmarks are defined as a map where the keys are a name for the given
function and the values are the functions to benchmark. Users can configure
the run by passing a keyword list as the second argument. For more
information on configuration see `Benchee.Configuration.init/1`.
## Examples
Benchee.run(%{"My Benchmark" => fn -> 1 + 1 end,
"My other benchmrk" => fn -> "1" ++ "1" end}, time: 3)
# Prints a summary of the benchmark to the console
Benchee.run(
%{
"My Benchmark" => fn -> 1 + 1 end,
"My other benchmark" => fn -> [1] ++ [1] end
},
warmup: 2,
time: 3
)
"""
def run(jobs, config \\ [])

def run(jobs, config) when is_list(config) do
do_run(jobs, config)
end

def run(config, jobs) when is_map(jobs) do
# pre 0.6.0 way of passing in the config first and as a map
do_run(jobs, config)
end

defp do_run(jobs, config) do
@spec run(map, keyword) :: any
def run(jobs, config \\ []) when is_list(config) do
config
|> Benchee.init()
|> Benchee.system()
@@ -68,11 +60,11 @@ for {module, moduledoc} <- [{Benchee, elixir_doc}, {:benchee, erlang_doc}] do
defdelegate init(), to: Benchee.Configuration
defdelegate init(config), to: Benchee.Configuration
defdelegate system(suite), to: Benchee.System
defdelegate benchmark(suite, name, function), to: Benchee.Benchmark
defdelegate benchmark(suite, name, function, printer), to: Benchee.Benchmark
defdelegate measure(suite), to: Benchee.Benchmark
defdelegate measure(suite, printer), to: Benchee.Benchmark
defdelegate benchmark(suite, name, function), to: Benchee.Benchmark
defdelegate statistics(suite), to: Benchee.Statistics
defdelegate load(suite), to: Benchee.ScenarioLoader
defdelegate benchmark(suite, name, function, printer), to: Benchee.Benchmark
end
end
18 changes: 2 additions & 16 deletions lib/benchee/benchmark.ex
@@ -4,10 +4,9 @@ defmodule Benchee.Benchmark do
Exposes `benchmark/4` and `measure/3` functions.
"""

alias Benchee.Benchmark.{Scenario, ScenarioContext, Runner}
alias Benchee.Benchmark.{Runner, Scenario, ScenarioContext}
alias Benchee.Output.BenchmarkPrinter, as: Printer
alias Benchee.Suite
alias Benchee.Utility.DeepConvert
alias Benchee.{Suite, Utility.DeepConvert}

@type job_name :: String.t() | atom
@no_input :__no_input
@@ -48,19 +47,6 @@ defmodule Benchee.Benchmark do
%Suite{suite | scenarios: List.flatten([scenarios | new_scenarios])}
end

defp build_scenarios_for_job(job_name, function, config)

defp build_scenarios_for_job(job_name, function, nil) do
[
build_scenario(%{
job_name: job_name,
function: function,
input: @no_input,
input_name: @no_input
})
]
end

defp build_scenarios_for_job(job_name, function, %{inputs: nil}) do
[
build_scenario(%{
2 changes: 1 addition & 1 deletion lib/benchee/benchmark/repeated_measurement.ex
@@ -21,7 +21,7 @@ defmodule Benchee.Benchmark.RepeatedMeasurement do
# with too high variance. Therefore determine an n how often it should be
# executed in the measurement cycle.

alias Benchee.Benchmark.{Hooks, Runner, Scenario, ScenarioContext, Measure}
alias Benchee.Benchmark.{Hooks, Measure, Runner, Scenario, ScenarioContext}
alias Benchee.Utility.RepeatN

@minimum_execution_time 10
8 changes: 2 additions & 6 deletions lib/benchee/benchmark/runner.ex
@@ -4,12 +4,8 @@ defmodule Benchee.Benchmark.Runner do
# This module actually runs our benchmark scenarios, adding information about
# run time and memory usage to each scenario.

alias Benchee.Benchmark
alias Benchee.Benchmark.{Scenario, ScenarioContext, Measure, Hooks, RepeatedMeasurement}
alias Benchee.Configuration
alias Benchee.Conversion
alias Benchee.Statistics
alias Benchee.Utility.Parallel
alias Benchee.{Benchmark, Configuration, Conversion, Statistics, Utility.Parallel}
alias Benchmark.{Hooks, Measure, RepeatedMeasurement, Scenario, ScenarioContext}

@doc """
Executes the benchmarks defined before by first running the defined functions
97 changes: 23 additions & 74 deletions lib/benchee/configuration.ex
@@ -4,20 +4,20 @@ defmodule Benchee.Configuration do
"""

alias Benchee.{
Suite,
Configuration,
Conversion.Duration,
Conversion.Scale,
Utility.DeepConvert,
Formatters.Console
Formatters.Console,
Suite,
Utility.DeepConvert
}

defstruct parallel: 1,
time: 5,
warmup: 2,
memory_time: 0.0,
pre_check: false,
formatters: [Console],
formatters: [{Console, %{comparison: true, extended_statistics: false}}],
percentiles: [50, 99],
print: %{
benchmarking: true,
@@ -27,16 +27,7 @@
inputs: nil,
save: false,
load: false,
# formatters should end up here but known once are still picked up at
# the top level for now
formatter_options: %{
console: %{
comparison: true,
extended_statistics: false
}
},
unit_scaling: :best,
# If you/your plugin/whatever needs it your data can go here
assigns: %{},
before_each: nil,
after_each: nil,
@@ -51,12 +42,11 @@
warmup: number,
memory_time: number,
pre_check: boolean,
formatters: [(Suite.t() -> Suite.t())],
formatters: [(Suite.t() -> Suite.t()) | {atom, map}],
print: map,
inputs: %{Suite.key() => any} | nil,
inputs: %{Suite.key() => any} | [{String.t(), any}] | nil,
save: map | false,
load: String.t() | [String.t()] | false,
formatter_options: map,
unit_scaling: Scale.scaling_strategy(),
assigns: map,
before_each: fun | nil,
@@ -263,17 +253,15 @@
...> warmup: 0.2,
...> formatters: [&IO.puts/1],
...> print: [fast_warning: false],
...> console: [comparison: false],
...> inputs: %{"Small" => 5, "Big" => 9999},
...> formatter_options: [some: "option"],
...> unit_scaling: :smallest)
%Benchee.Suite{
configuration:
%Benchee.Configuration{
parallel: 2,
time: 1_000_000_000.0,
warmup: 200_000_000.0,
inputs: %{"Small" => 5, "Big" => 9999},
inputs: [{"Big", 9999}, {"Small", 5}],
save: false,
load: false,
formatters: [&IO.puts/1],
@@ -282,13 +270,6 @@
fast_warning: false,
configuration: true
},
formatter_options: %{
console: %{
comparison: false,
extended_statistics: false
},
some: "option"
},
percentiles: [50, 99],
unit_scaling: :smallest,
assigns: %{},
@@ -309,7 +290,6 @@
config
|> standardized_user_configuration
|> merge_with_defaults
|> formatter_options_to_tuples
|> convert_time_to_nano_s
|> update_measure_memory
|> save_option_conversion
@@ -319,45 +299,23 @@

defp standardized_user_configuration(config) do
config
|> DeepConvert.to_map([:formatters])
|> translate_formatter_keys
|> DeepConvert.to_map([:formatters, :inputs])
|> force_string_input_keys
end

# backwards compatible translation of formatter keys to go into
# formatter_options now
@formatter_keys [:console, :csv, :json, :html]
defp translate_formatter_keys(config) do
{formatter_options, config} = Map.split(config, @formatter_keys)
DeepMerge.deep_merge(%{formatter_options: formatter_options}, config)
end

alias Benchee.Formatters.{Console, CSV, JSON, HTML}

# backwards compatible formatter definition without leaving the burden on every formatter
defp formatter_options_to_tuples(config) do
update_in(config, [Access.key(:formatters), Access.all()], fn
Console -> formatter_configuration_from_options(config, Console, :console)
CSV -> formatter_configuration_from_options(config, CSV, :csv)
JSON -> formatter_configuration_from_options(config, JSON, :json)
HTML -> formatter_configuration_from_options(config, HTML, :html)
formatter -> formatter
end)
end

defp formatter_configuration_from_options(config, module, legacy_option_key) do
if Map.has_key?(config.formatter_options, legacy_option_key) do
{module, config.formatter_options[legacy_option_key]}
else
module
end
end

defp force_string_input_keys(config = %{inputs: inputs}) do
standardized_inputs =
for {name, value} <- inputs, into: %{} do
{to_string(name), value}
end
inputs
|> Enum.reduce([], fn {name, value}, acc ->
normalized_name = to_string(name)

if List.keymember?(acc, normalized_name, 0) do
acc
else
[{normalized_name, value} | acc]
end
end)
|> Enum.reverse()

%{config | inputs: standardized_inputs}
end
@@ -400,22 +358,13 @@
""")
end

defp save_option_conversion(config = %{save: false}) do
config
end
defp save_option_conversion(config = %{save: false}), do: config

defp save_option_conversion(config = %{save: save_values}) do
save_options = Map.merge(save_defaults(), save_values)

tagged_save_options = %{
tag: save_options.tag,
path: save_options.path
}

%__MODULE__{
config
| formatters: config.formatters ++ [{Benchee.Formatters.TaggedSave, tagged_save_options}]
}
tagged_save_options = %{tag: save_options.tag, path: save_options.path}
formatters = config.formatters ++ [{Benchee.Formatters.TaggedSave, tagged_save_options}]
%__MODULE__{config | formatters: formatters}
end

defp save_defaults do
2 changes: 1 addition & 1 deletion lib/benchee/conversion.ex
@@ -6,7 +6,7 @@ defmodule Benchee.Conversion do
"""

alias Benchee.Benchmark.Scenario
alias Benchee.Conversion.{Duration, Count, Memory}
alias Benchee.Conversion.{Count, Duration, Memory}

@doc """
Takes scenarios and a given scaling_strategy, returns the best units for the
