
# emark

Lightweight benchmarking framework for Erlang.

It is a rebar plugin that benchmarks your code.

## What it looks like

```
==> bench0 (emark)
calc_something/0     20000  38.1 µs/op
parse_omg_wtf/1     500000   1.6 µs/op

benchmark                      old µs/op  new µs/op   delta
calc_something/0                    37.7       38.0  +0.79%
parse_omg_wtf/1                      1.6        1.5  -6.67%
```

## Usage example

There is an example in the examples/bench0 subdirectory.

```shell
cd examples/bench0
mkdir -p deps
ln -s `pwd`/../../ deps/emark
../../rebar compile
../../rebar emark
```

## Details

emark works almost like eunit.

```erlang
-include_lib("emark/include/emark.hrl").

-ifdef(BENCHMARK).

my_function_benchmark(N) ->
  Input = prepare_n_inputs(N),
  emark:start(?MODULE, my_function, 1),
  lists:foreach(fun(E) -> _ = my_function(E) end, Input).

-endif.
```

The main difference is the emark:start call. It starts tracing the specified function, so emark knows how many times the function was actually called while the benchmark was running. It also starts the timer, so any preparation should be done before calling it.
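The counting-by-tracing idea can be sketched with plain `erlang:trace/3` and `erlang:trace_pattern/3`. This is a hypothetical, self-contained illustration of the technique, not emark's actual implementation: the caller traces a worker process and counts the `call` trace messages it receives for one MFA.

```erlang
-module(trace_count).
-export([count_calls/2, bump/1]).

%% An example function to trace.
bump(X) -> X + 1.

%% Run Fun in a fresh process and count how many times MFA is called.
count_calls({_M, _F, _Arity} = MFA, Fun) ->
  Tracer = self(),
  Worker = spawn(fun() ->
                   receive go -> ok end,
                   Fun(),
                   Tracer ! done
                 end),
  erlang:trace(Worker, true, [call]),        % trace messages come to us
  erlang:trace_pattern(MFA, true, [local]),  % match every call to MFA
  Worker ! go,
  count(0).

count(N) ->
  receive
    {trace, _Pid, call, _MFA} -> count(N + 1);
    done                      -> N
  end.
```

For example, `trace_count:count_calls({trace_count, bump, 1}, fun() -> trace_count:bump(1), trace_count:bump(2) end)` should return 2. Because messages from the traced worker arrive in order, the `done` message reliably follows the last `call` trace message.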

emark prints benchmark results to the terminal. The first column is the name of the function and its arity, the second is the number of iterations during the benchmark, and the third is the average time per call in microseconds.

By default, emark also saves a report to a file under the .emark/ directory. It is used on the next run to show the difference between runs. The idea behind emark is that you run the benchmark (rebar emark), change the code of some important function, and rerun the benchmark to see whether anything got better (or worse).
