Finer control on the duration/significance of criterium runs #9

Open

davidsantiago opened this issue Jul 27, 2012 · 5 comments

@davidsantiago (Collaborator)

As discussed in issue #8, it would be helpful to have finer control over how long criterium runs benchmarks.

The gap between "quick" and "full" can be enormous. Having more ways to pin down criterium's statistical sampling parameters would go a long way towards making it a tool you can run to understand your code's performance changes without it slowing you down too much. I have some benchmarks that take over a second to run a single iteration, while others run thousands and thousands of times in a benchmark run because of the minimum time requirements. Being able to independently set "minimum runs", "maximum runs", "minimum time", and "maximum time" parameters would be great. Or at least having a few more levels between quick and full. Or, even better, break out the independent parameters and then give helpful names to a few stock configurations for convenience.
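
To make the request concrete, here is a hypothetical sketch of what the broken-out parameters and named stock configurations might look like. None of these option keys, preset names, or the commented-out call exist in criterium today; they are purely illustrative.

```clojure
;; Hypothetical sketch only: these option keys and presets are not part of
;; criterium's current API; they illustrate the parameters requested above.
(def precise-opts
  {:min-runs   10      ; never take fewer than this many timed runs
   :max-runs   10000   ; hard cap on the number of timed runs
   :min-time-s 1       ; keep sampling for at least this long
   :max-time-s 60})    ; ...but never longer than this

;; A few stock configurations with helpful names, built from the same knobs:
(def presets
  {:quick    {:min-runs 5  :max-runs 100    :max-time-s 10}
   :standard {:min-runs 30 :max-runs 10000  :max-time-s 120}
   :full     {:min-runs 60 :max-runs 100000 :max-time-s 600}})

;; Usage might then look something like (again, hypothetical):
;; (criterium.core/benchmark* #(my-slow-fn) (:standard presets))
```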

@xpe commented Apr 4, 2014

I think the suggestions by @davidsantiago would be helpful.

@hugoduncan Are you suggesting that we read over those papers to get ideas on how to expose additional tuning parameters? Or are you saying something more? Perhaps it simply isn't clear how to do so?

@RickMoynihan

👍 to @davidsantiago's suggestions.

I've been playing with perforate, and I want to start using it to get a rough idea of performance across multiple services.

While I appreciate criterium's efforts to support microbenchmarking on the JVM, perforate, even with --quick, does too many runs for my use case (e.g. when benchmarking an operation that normally takes 10 minutes). In some circumstances just one or two runs are probably enough for some tests, so being able to configure this flexibly through criterium, and then exposing it through perforate, would be useful.

I appreciate that this is perhaps a different use case, but having a crude sledgehammer to hard-code the number of test runs would be really useful.
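
For what it's worth, that sledgehammer can be approximated today without criterium at all. Below is a minimal sketch in plain Clojure, assuming nothing beyond System/nanoTime; the helper name time-n-runs is made up, and it does no warmup or statistical analysis.

```clojure
;; Crude workaround sketch (plain Clojure, not criterium): call f exactly
;; n times and report each run's wall-clock time in milliseconds. No JIT
;; warmup and no statistics beyond the raw samples -- which is fine when a
;; single run takes ten minutes and one or two runs are all you can afford.
(defn time-n-runs
  "Calls f n times, returning a vector of elapsed wall-clock times in ms."
  [n f]
  (mapv (fn [_]
          (let [start (System/nanoTime)]
            (f)
            (/ (- (System/nanoTime) start) 1e6)))
        (range n)))

;; e.g. (time-n-runs 2 #(run-ten-minute-job))  ; run-ten-minute-job is hypothetical
```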

@didibus commented Nov 21, 2016

A way to specify how long the benchmark is allowed to run at most would be useful. Sometimes I start a benchmark on a function that takes 1 or 2 seconds to run and criterium runs for so long that I don't even have the patience to wait for it, while other times it takes only a few seconds to finish.

So if there were a way to say "run for at most 10 seconds, and do your best to achieve reliable results within that time frame", it would be a great feature.
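
A rough sketch of that behaviour, in plain Clojure rather than anything criterium exposes: keep sampling until a wall-clock budget is spent, then summarise whatever was collected. The function name and return shape are made up.

```clojure
;; Sketch of a "run for at most this long" sampler (not criterium's API).
;; It times calls to f until the budget expires -- always taking at least
;; one sample, and possibly overrunning by up to one run, since the budget
;; is checked after each sample -- then returns the samples and their mean.
(defn sample-within-budget
  [budget-ms f]
  (let [deadline (+ (System/nanoTime) (* budget-ms 1000000))]
    (loop [samples []]
      (let [start   (System/nanoTime)
            _       (f)
            elapsed (/ (- (System/nanoTime) start) 1e6)
            samples (conj samples elapsed)]
        (if (< (System/nanoTime) deadline)
          (recur samples)
          {:samples samples
           :mean-ms (/ (reduce + samples) (count samples))})))))

;; e.g. (sample-within-budget 10000 #(my-fn))  ; spend roughly 10 seconds at most
```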

@KingMob commented Jun 14, 2024

This would be helpful. I started using criterium for some long-running fns, but if the fns take long enough, the warmup period becomes unpredictable and can stretch out for a looooong time, because the JIT stabilization process isn't too smart. (Based on what Java exposes, it can't be.) Parameters to control this more finely (or at least a maximum time for warmup/stabilization) would be helpful.

As it is, I have to stop using criterium in this case.
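
As a stopgap, the warmup can at least be bounded by hand before taking measurements with something simpler than criterium. A minimal sketch, assuming only System/nanoTime; warm-up-for is a made-up name, and it offers no guarantee that the JIT has actually stabilized -- it only caps how long is spent trying.

```clojure
;; Workaround sketch: spend at most warmup-ms repeatedly calling f so the
;; JIT gets a bounded chance to compile the hot paths, then return. This
;; does not guarantee stabilization; it only caps the time spent on warmup.
(defn warm-up-for
  [warmup-ms f]
  (let [deadline (+ (System/nanoTime) (* warmup-ms 1000000))]
    (while (< (System/nanoTime) deadline)
      (f))))

;; e.g. (warm-up-for 30000 #(my-long-running-fn))  ; cap warmup at ~30 s
;; ...then take a small, fixed number of timed runs by hand.
```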
