Add `dry_run` functionality #178

Merged
merged 5 commits into master from dry-run on Feb 2, 2018

Conversation

@devonestes
Collaborator

devonestes commented Jan 29, 2018

This adds a new feature for a dry run, which will execute each scenario
(including before and after hooks) to make sure all your scenarios will
run without exception before doing the actual benchmark. This isn't
counted towards the run times of any benchmarks.

Closes #177
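
For illustration, this is roughly how the option is meant to be used from a benchmark script; the job and the timing values below are made up, only the `dry_run` key is what this PR adds:

```elixir
Benchee.run(
  %{
    # an arbitrary example job, not part of this PR
    "flat_map" => fn -> Enum.flat_map(1..10_000, fn i -> [i, i * i] end) end
  },
  warmup: 2,
  time: 5,
  # new option: execute every scenario (hooks included) once up front so
  # errors surface before any warmup or measurement time is spent
  dry_run: true
)
```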

devonestes added some commits Jan 29, 2018

Add `dry_run` functionality
This adds a new feature for a dry run, which will execute each scenario
(including before and after hooks) to make sure all your scenarios will
run without exception before doing the actual benchmark. This isn't
counted towards the run times of any benchmarks.

I also ran the formatter on files I touched while adding this feature.
Add config for Credo
Now that the official Elixir formatter has a line length of 96, we need
to make sure Credo doesn't complain about that.
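
A minimal `.credo.exs` along these lines would cover that; the actual file contents aren't shown in this excerpt, so treat the snippet as a sketch:

```elixir
# .credo.exs
%{
  configs: [
    %{
      name: "default",
      files: %{included: ["lib/", "test/"]},
      checks: [
        # match the Elixir formatter's 96-character lines so Credo and
        # `mix format` don't disagree about line length
        {Credo.Check.Readability.MaxLineLength, max_length: 96}
      ]
    }
  ]
}
```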
@PragTob

jeez, benchee has a lot of code by now :D

First off, thanks, this looks good 💚 I think the dry run needs to be positioned at a different point to achieve the intended effect, and there are a couple of other things I commented on inline.

Other than that, I think we should really get the formatter changes in separate PRs; they add a lot of clutter, especially here, which made reviewing quite tricky. I know I've certainly done this myself before, but going forward keeping it separate is probably better :)

(attached photo: img_20180128_112622)

@@ -1,2 +1,2 @@
-elixir 1.6.0
+elixir master

@PragTob

PragTob Jan 29, 2018

Owner

I think we should stick to released versions here :D

@devonestes

devonestes Jan 31, 2018

Collaborator

I've been working on elixir master so I can get the yellow highlighting for skipped tests. Is there a way for me to have a local override of what's checked into git here? If not, I can just be sure not to check my changes in - not that big of an issue.

@PragTob

PragTob Jan 31, 2018

Owner

Hmmm... I guess not. Or not that I know of :(

### Features (User Facing)
* new `dry_run` configuration option which allows users to add a dry run of all
benchmarks with each input before running the actual suite. This should save
time while actually writing the code for your benchmarks.

@PragTob

PragTob Jan 29, 2018

Owner

one line is indented, the other one isn't

## 0.13.0 (2018-??-??)
Adds the ability to run a `dry_run` of your benchmarks if you want to make sure
everything will run without error before running the full set of benchmarks.

@PragTob

PragTob Jan 29, 2018

Owner

Usually I just write the summary before releasing - imo there's no need to write it upfront, but it certainly isn't bad :)

README.md Outdated
@@ -143,6 +143,7 @@ The available options are the following (also documented in [hexdocs](https://he
* `warmup` - the time in seconds for which a benchmarking job should be run without measuring times before real measurements start. This simulates a _"warm"_ running system. Defaults to 2.
* `time` - the time in seconds for how long each individual benchmarking job should be run and measured. Defaults to 5.
* `dry_run` - whether or not to run each job with each input to ensure that your code executes without error. This can save time while developing your suites.

@PragTob

PragTob Jan 29, 2018

Owner

This needs something along the lines of "before the entire suite executes" to be clearer, and a "Defaults to `false`." would be nice I think :)

printer.benchmarking(job_name, input_name, config)
dry_run(scenario, scenario_context)

@PragTob

PragTob Jan 29, 2018

Owner

I don't think this is the right place? As I understood it, dry_run should execute once for all scenarios before everything is benchmarked; this is already specific to one scenario, if I'm not mistaken.
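
A sketch of the reordering being suggested here, matching the diff that shows up further down in the thread - all dry runs happen before any measuring starts:

```elixir
# run every scenario's dry run first ...
Enum.each(scenarios, fn scenario -> dry_run(scenario, scenario_context) end)

# ... and only then kick off the real (parallel) benchmarks
Enum.map(scenarios, fn scenario -> parallel_benchmark(scenario, scenario_context) end)
```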

 available_memory: "8 Trillion",
-cpu_speed:        "light speed"
+cpu_speed: "light speed"

@PragTob

PragTob Jan 29, 2018

Owner

is the formatter really actively disallowing aligning the values of a map? If so, this might be my breaking point to actually go and lobby for this. It increases readability so much imo.

@devonestes

devonestes Jan 31, 2018

Collaborator

Dave Thomas is already pushing pretty hard for that change in the formatter: elixir-lang/elixir#6647

@PragTob

PragTob Jan 31, 2018

Owner

Commented over there, thanks for showing me @devonestes 👼

{fn -> :timer.sleep(1) end,
 before_each: fn input ->
   send(me, :before)
   input

@PragTob

PragTob Jan 29, 2018

Owner

does it complain about the ; ? I know it's not great but I'd say the version above is a ton more readable than this here :(

@devonestes

devonestes Jan 31, 2018

Collaborator

Yep - all functions with more than one clause go on multiple lines as of right now. I haven't seen anyone propose allowing ;, though, so maybe that's something they'd consider.

@PragTob

PragTob Jan 31, 2018

Owner

I don't wanna be the guy known for wanting to allow that. I agree that in 99% of the cases it's bad but here with the whole alignment it seemed nice

:before_scenario,
:before,
:after,
:after_scenario

@PragTob

PragTob Jan 29, 2018

Owner

we aren't allowed to put all these in a line anymore? :'(

@devonestes

devonestes Jan 31, 2018

Collaborator

We can put them on one line, but because we had them over multiple lines before, the formatter kept them like this instead of pulling them all up to a single line.

inputs = %{"small" => 1, "big" => 100}
config = %{time: 0, warmup: 0, inputs: inputs, dry_run: true}

@PragTob

PragTob Jan 29, 2018

Owner

if we wanna catch what I think I identified as a bug above (dry run should run completely before anything "real" is run) then we could give it warmup and time and make the last function crash (or add a function that crashes). Then we can assert that exactly these ones were received.

@devonestes

devonestes Jan 31, 2018

Collaborator

You know what - I'll put an error in the after_scenario and make sure we only get the before_each and after_each hooks once. That should do it!
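
A rough sketch of that test, reusing helpers visible elsewhere in this diff (`Benchmark.measure/2`, `TestPrinter`, `assert_received_exactly/1`); the suite-building call (`test_suite/1` here) and the exact hook plumbing are assumptions, not the final implementation:

```elixir
test "dry run aborts before any real measurement when a hook raises" do
  me = self()

  config = %{
    time: 0.01,
    warmup: 0.01,
    dry_run: true,
    # the crashing hook: anything after the dry run should never happen
    after_scenario: fn _ -> raise "stop after the dry run" end
  }

  # assuming the dry run executes in the test process so the raise propagates
  assert_raise RuntimeError, fn ->
    test_suite(config)
    |> Benchmark.benchmark("job", {
      fn -> :timer.sleep(1) end,
      before_each: fn input ->
        send(me, :before)
        input
      end,
      after_each: fn _ -> send(me, :after) end
    })
    |> Benchmark.measure(TestPrinter)
  end

  # with warmup and time configured, a real run would send many more messages;
  # exactly one of each means only the dry run executed the hooks
  assert_received_exactly([:before, :after])
end
```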

|> Benchmark.measure(TestPrinter)
assert_received_exactly([{:first, 100}, {:first, 1}, {:second, 100}, {:second, 1}])
end

@PragTob

PragTob Jan 29, 2018

Owner

I think the fact that all the hooks trigger around a dry run is important enough to spec in an individual test, what do you think? :)

@devonestes

devonestes Jan 31, 2018

Collaborator

Yeah, totally. I'll add that as well.

@OvermindDL1

OvermindDL1 commented Jan 29, 2018

dry_run to me implies that the benchmarking will not be performed at all; rather, it will just do a quick test run of each and exit. I'd name it something like test_run_at_start or so.

@devonestes

Collaborator

devonestes commented Jan 31, 2018

Yeah, sorry about the formatting all lumped in with the other changes. Muscle memory is creeping in for me I guess. I lean pretty heavily on the formatter these days to keep things pretty!

Implement feedback and add additional test
This adds a test to make sure callbacks are run during the dry run, and
also makes sure it's working properly.
@PragTob

Owner

PragTob commented Jan 31, 2018

Also regarding what @OvermindDL1 said - it seems logical now that you say it. In the Unix shell, a dry run is mostly associated with simulating the work but NOT actually doing it.

@devonestes how do you feel about pre_run or so? That is short but still sort of easy to understand. test_run_at_start is a bit long for my taste; I like understandable variable names, but for options I tend to keep them shorter :)

@OvermindDL1

OvermindDL1 commented Jan 31, 2018

I'm +1 for pre_run.

@PragTob

Looks great to me, the only thing that remains is discussing the potential naming change.

Thanks a ton!

(attached photo: img_20180128_112506)

-  parallel_benchmark(scenario, scenario_context)
-end)
+Enum.each(scenarios, fn scenario -> dry_run(scenario, scenario_context) end)
+Enum.map(scenarios, fn scenario -> parallel_benchmark(scenario, scenario_context) end)

@devonestes

Collaborator

devonestes commented Feb 1, 2018

For some reason pre_run doesn't make sense to me - it seems incomplete, like, what are we doing pre-run?

Maybe pre-check?

@PragTob

Owner

PragTob commented Feb 1, 2018

pre_check / pre-check is fine by me (we don't use - anywhere else in options though)

@devonestes

Collaborator

devonestes commented Feb 2, 2018

Cool, I slept on it and I still like pre_check, so I went ahead and updated this to use that name.

@PragTob

PragTob approved these changes Feb 2, 2018

😍

@PragTob PragTob merged commit c1349fc into master Feb 2, 2018

3 checks passed

* continuous-integration/travis-ci/pr: The Travis CI build passed
* continuous-integration/travis-ci/push: The Travis CI build passed
* coverage/coveralls: Coverage increased (+0.02%) to 94.954%

@PragTob PragTob deleted the dry-run branch Feb 2, 2018

@michalmuskala

Contributor

michalmuskala commented Feb 2, 2018

❤️
