Don't tune manually set evals parameter #318
Conversation
FWIW, LGTM. (But I'm not a maintainer.)
I'm a maintainer but I have only been one for a few days 🤣
```diff
@@ -79,7 +79,7 @@ You can pass the following keyword arguments to `@benchmark`, `@benchmarkable`,

 - `samples`: The number of samples to take. Execution will end if this many samples have been collected. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.samples = 10000`.
 - `seconds`: The number of seconds budgeted for the benchmarking process. The trial will terminate if this time is exceeded (regardless of `samples`), but at least one sample will always be taken. In practice, actual runtime can overshoot the budget by the duration of a sample. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.seconds = 5`.
-- `evals`: The number of evaluations per sample. For best results, this should be kept consistent between trials. A good guess for this value can be automatically set on a benchmark via `tune!`, but using `tune!` can be less consistent than setting `evals` manually. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.evals = 1`.
+- `evals`: The number of evaluations per sample. For best results, this should be kept consistent between trials. A good guess for this value can be automatically set on a benchmark via `tune!`, but using `tune!` can be less consistent than setting `evals` manually (which bypasses tuning). Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.evals = 1`. If the function you study mutates its input, it is probably a good idea to set `evals=1` manually.
```
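To illustrate the advice in the added sentence (the function and vector size here are arbitrary examples, not from the PR): a mutating function like `sort!` would see already-sorted input on every evaluation after the first, so pinning `evals=1` together with a `setup` expression keeps each sample honest.

```julia
using BenchmarkTools

# `sort!` mutates its argument: with evals > 1, every evaluation after
# the first would time sorting an already-sorted vector. Pinning
# evals=1 and regenerating the input in `setup` avoids that.
@benchmark sort!(x) setup=(x = rand(1000)) evals=1
```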
I think a clearer API might be to introduce a special value like `evals=nothing` to indicate an unset `evals`. The text could then look something like this:
- `evals`: The number of evaluations per sample. For best results, this should be kept consistent between trials. If not specified manually, defaults to `BenchmarkTools.DEFAULT_PARAMETERS.evals = nothing` and `tune!` is used to automatically set the value to a good guess for the benchmark. Functions which are sensitive to additional un-benchmarked evaluations (e.g. mutating functions) can set `evals` manually to ensure no automatic tuning is performed.
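As a hypothetical sketch of how that proposal could read in practice (the `nothing` default and the skip-tuning semantics are the proposal, not current BenchmarkTools behavior):

```julia
using BenchmarkTools

# Proposed semantics (hypothetical): leaving evals unset means
# "tune me", so `tune!` fills in a good guess before running.
b = @benchmarkable sin(1.0)
tune!(b)   # evals was unset, so tuning picks a value
run(b)

# Proposed semantics (hypothetical): an explicit evals disables tuning.
b2 = @benchmarkable push!(v, 1) setup=(v = Int[]) evals=1
run(b2)    # evals stays 1; no extra un-benchmarked evaluations
```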
I agree that it would be clearer, and that is what @vchuravy suggested at first. There are two reasons I chose the `evals_set::Bool` implementation instead:

- It's a bit convoluted to remember that `evals=nothing` really means `evals=1`.
- Much of the code relies on a numeric value for `evals`, so my choice was a way of fixing these numerous bugs without checking `if isnothing(evals)` every time.

Feel free to weigh in here.
This PR adds a boolean `evals_set` field to benchmark parameters for manually set `evals`. If `evals_set = true`, tuning does nothing.

Fixes #24 and related issues.
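A minimal sketch of the mechanism described above, assuming the names from the PR description (`estimate_evals` is a hypothetical stand-in for the real tuning heuristic, and the actual `Parameters` layout in BenchmarkTools differs):

```julia
# Illustrative sketch, not the PR's exact code.
mutable struct Parameters
    evals::Int
    evals_set::Bool   # true once the user sets `evals` manually
end

# Hypothetical stand-in for the real tuning heuristic.
estimate_evals(p::Parameters) = 100

function tune!(p::Parameters)
    p.evals_set && return p       # manually set evals: tuning does nothing
    p.evals = estimate_evals(p)   # otherwise pick a good guess
    return p
end

tune!(Parameters(1, true)).evals   # => 1   (left untouched)
tune!(Parameters(1, false)).evals  # => 100 (tuned)
```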