
Add dry_run functionality #178

Merged
merged 5 commits into master on Feb 2, 2018
Conversation

devonestes (Collaborator)

This adds a new feature for a dry run, which will execute each scenario
(including before and after hooks) to make sure all your scenarios will
run without exception before doing the actual benchmark. This isn't
counted towards the run times of any benchmarks.

Closes #177

I also ran the formatter on files I touched while adding this feature.
Now that the official Elixir formatter has a line length of 96, we need
to make sure Credo doesn't complain about that.
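From a user's perspective, the feature described in this PR might be enabled like this (a minimal, hypothetical sketch based on the README wording in this PR; the benchmark names and functions are invented, and the option is later renamed to `pre_check` further down in this thread):

```elixir
# Hypothetical usage sketch: with dry_run: true, every scenario (including
# before/after hooks) is executed once up front, so a crashing scenario
# fails fast instead of after warmup and measurement time has been spent.
Benchee.run(
  %{
    "flat_map" => fn -> Enum.flat_map(1..100, fn i -> [i, i * 2] end) end,
    "map.flatten" => fn ->
      1..100 |> Enum.map(fn i -> [i, i * 2] end) |> List.flatten()
    end
  },
  warmup: 2,
  time: 5,
  dry_run: true
)
```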
@PragTob (Member) left a comment

jeez, benchee has a lot of code by now :D

First off, thanks, this looks good 💚 I think dry run needs to be positioned at another point to achieve the intended effect, plus a couple of other things I commented inline.

Other than that, I think we should really keep the formatter changes in separate PRs - there's a lot of clutter here that made reviewing quite tricky. I know I've certainly done this myself before, but going forward keeping it separate is probably better :)


.tool-versions Outdated
@@ -1,2 +1,2 @@
-elixir 1.6.0
+elixir master
PragTob (Member):

I think we should stick to released versions here :D

devonestes (Collaborator, Author):

I've been working on elixir master so I can get the yellow highlighting for skipped tests. Is there a way for me to have a local override to what's checked into git here? If not I can just be sure not to check my changes in - not that big of an issue.

PragTob (Member):

Hmmm... I guess not. Or not that I know of :(

CHANGELOG.md Outdated
### Features (User Facing)
* new `dry_run` configuration option which allows users to add a dry run of all
benchmarks with each input before running the actual suite. This should save
time while actually writing the code for your benchmarks.
PragTob (Member):

One line is indented, the other one isn't.

CHANGELOG.md Outdated
## 0.13.0 (2018-??-??)

Adds the ability to run a `dry_run` of your benchmarks if you want to make sure
everything will run without error before running the full set of benchmarks.
PragTob (Member):

Usually I just write the summary before releasing - imo no need to write it upfront, but it certainly isn't bad :)

README.md Outdated
@@ -143,6 +143,7 @@ The available options are the following (also documented in [hexdocs](https://he

* `warmup` - the time in seconds for which a benchmarking job should be run without measuring times before real measurements start. This simulates a _"warm"_ running system. Defaults to 2.
* `time` - the time in seconds for how long each individual benchmarking job should be run and measured. Defaults to 5.
* `dry_run` - whether or not to run each job with each input to ensure that your code executes without error. This can save time while developing your suites.
PragTob (Member):

This needs something like "before the entire suite executes" to be clearer, and a "Defaults to false" would be nice, I think :)

printer.benchmarking(job_name, input_name, config)
dry_run(scenario, scenario_context)
PragTob (Member):

I don't think this is the right place? As I understood it, dry_run should execute once for all scenarios before everything is benchmarked; this is already specific to one scenario, if I'm not mistaken.
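The restructuring the reviewer is asking for might be sketched like this (hedged; `dry_run/2` and `parallel_benchmark/2` are names from this PR's diff hunks, while the surrounding function name is a guess):

```elixir
# Sketch: dry-run *all* scenarios first, and only then start the real
# (parallel) benchmarking, so any crash surfaces before measuring begins.
def measure_scenarios(scenarios, scenario_context) do
  Enum.each(scenarios, fn scenario -> dry_run(scenario, scenario_context) end)
  Enum.map(scenarios, fn scenario -> parallel_benchmark(scenario, scenario_context) end)
end
```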

available_memory: "8 Trillion",
-cpu_speed:        "light speed"
+cpu_speed: "light speed"
PragTob (Member):

is the formatter really actively disallowing aligning the values of a map? If so, this might be my breaking point to actually go and lobby for this. It increases readability so much imo.

devonestes (Collaborator, Author):

Dave Thomas is already pushing pretty hard for that change in the formatter: elixir-lang/elixir#6647

PragTob (Member):

😍

PragTob (Member):

Commented thanks for showing me @devonestes 👼

{fn -> :timer.sleep(1) end,
before_each: fn input ->
send(me, :before)
input
PragTob (Member):

does it complain about the ; ? I know it's not great but I'd say the version above is a ton more readable than this here :(

devonestes (Collaborator, Author):

Yep - all functions with more than one clause go on multiple lines as of right now. I haven't seen anyone propose allowing ;, though, so maybe that's something they'd consider.

PragTob (Member):

I don't wanna be the guy known for wanting to allow that. I agree that in 99% of cases it's bad, but here with the whole alignment it seemed nice.

:before_scenario,
:before,
:after,
:after_scenario
PragTob (Member):

we aren't allowed to put all these in a line anymore? :'(

devonestes (Collaborator, Author):

We can put them in one line, but because we had it over multiple lines before, the formatter kept it this way instead of pulling it all up to a single line.


inputs = %{"small" => 1, "big" => 100}

config = %{time: 0, warmup: 0, inputs: inputs, dry_run: true}
PragTob (Member):

if we wanna catch what I think I identified as a bug above (dry run should run completely before anything "real" is run) then we could give it warmup and time and make the last function crash (or add a function that crashes). Then we can assert that exactly these ones were received.

devonestes (Collaborator, Author):

You know what - I'll put an error in the after_scenario and make sure we only get the before_each and after_each hooks once. That should do it!
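Such a test might look roughly like the following (a hedged sketch; `test_suite/1`, `TestPrinter`, and `assert_received_exactly/1` are helper names that appear elsewhere in this PR's test code, and the exact option keys and struct shape are assumptions):

```elixir
# Sketch: crash in after_scenario so the suite never gets past the dry run,
# then assert that each per-call hook fired exactly once.
test "dry run triggers each hook exactly once" do
  me = self()
  config = %{time: 0.1, warmup: 0.1, dry_run: true}

  try do
    test_suite(%Suite{configuration: config})
    |> Benchmark.benchmark("job", {
      fn -> :timer.sleep(1) end,
      before_each: fn input ->
        send(me, :before)
        input
      end,
      after_each: fn _ -> send(me, :after) end,
      after_scenario: fn _ -> raise "kaboom" end
    })
    |> Benchmark.measure(TestPrinter)
  rescue
    # the raise in after_scenario is expected; swallow it so we can assert
    _ -> nil
  end

  assert_received_exactly([:before, :after])
end
```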

|> Benchmark.measure(TestPrinter)

assert_received_exactly([{:first, 100}, {:first, 1}, {:second, 100}, {:second, 1}])
end
PragTob (Member):

I think that all the hooks triggering around a dry run is important enough to spec in an individual test, what do you think? :)

devonestes (Collaborator, Author):

Yeah, totally. I'll add that as well.

@OvermindDL1

dry_run to me implies that the benchmarking will not be performed at all, rather it will just do a quick test run of each and exit. I'd name it something like test_run_at_start or so.

@devonestes (Collaborator, Author)

Yeah, sorry about the formatting all lumped in with the other changes. Muscle memory is creeping in for me I guess. I lean pretty heavily on the formatter these days to keep things pretty!

@devonestes force-pushed the dry-run branch 2 times, most recently from cd22acd to e7bae71 on January 31, 2018 at 10:25
This adds a test to make sure callbacks are run in the dry runs, and
also makes sure it's working properly.
@PragTob (Member) commented Jan 31, 2018

Also regarding what @OvermindDL1 said - seems logical now that you say it. In the unix shell dry run is mostly associated with simulating it but NOT doing the actual work.

@devonestes how do you feel about pre_run or so? That is short but still sort of easy to understand. test_run_at_start is a bit long for my taste; I like understandable variable names, but for options I tend to keep them shorter :)

@OvermindDL1

I'm +1 for pre_run.

@PragTob (Member) left a comment

Looks great to me; the only thing that remains is discussing the potential naming change.

Thanks a ton!


-    parallel_benchmark(scenario, scenario_context)
-  end)
+  Enum.each(scenarios, fn scenario -> dry_run(scenario, scenario_context) end)
+  Enum.map(scenarios, fn scenario -> parallel_benchmark(scenario, scenario_context) end)
PragTob (Member):

😍

@devonestes (Collaborator, Author)

For some reason pre_run doesn't make sense to me - it seems incomplete, like, what are we doing pre-run?

Maybe pre-check?

@PragTob (Member) commented Feb 1, 2018

pre_check / pre-check is fine by me (we don't use - anywhere else in options though)

@devonestes (Collaborator, Author)

Cool, I slept on it and I still like pre_check, so I went ahead and updated this to use that name.

@PragTob (Member) left a comment

😍

@PragTob PragTob merged commit c1349fc into master Feb 2, 2018
@PragTob PragTob deleted the dry-run branch February 2, 2018 16:05
@michalmuskala (Contributor)

❤️
