
Don't tune manually set evals parameter #318

Merged
merged 4 commits into from Jun 20, 2023

Conversation

@gdalle (Collaborator) commented Jun 14, 2023

This PR adds a boolean `evals_set` field to the benchmark parameters for manually set `evals`. If `evals_set = true`, tuning does nothing.

Fixes #24 and related issues
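A minimal sketch of the intended user-facing behavior after this PR (assuming the semantics described above, where a manually set `evals` survives tuning):

```julia
using BenchmarkTools

b = @benchmarkable sum($(rand(1000))) evals=50
tune!(b)                      # with this PR, tuning is a no-op here
@assert b.params.evals == 50  # the manually chosen value survives
```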

@gdalle gdalle requested a review from jrevels June 14, 2023 13:19
@gdalle gdalle requested review from vchuravy and vtjnash June 14, 2023 13:26
@vtjnash vtjnash removed their request for review June 14, 2023 14:50

@carstenbauer carstenbauer left a comment


FWIW, LGTM. (But I'm not a maintainer.)

@gdalle (Collaborator, Author) commented Jun 20, 2023

> (But I'm not a maintainer.)

I'm a maintainer but I have only been one for a few days 🤣

@gdalle gdalle merged commit 7d37634 into JuliaCI:master Jun 20, 2023
9 checks passed
@@ -79,7 +79,7 @@ You can pass the following keyword arguments to `@benchmark`, `@benchmarkable`,

 - `samples`: The number of samples to take. Execution will end if this many samples have been collected. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.samples = 10000`.
 - `seconds`: The number of seconds budgeted for the benchmarking process. The trial will terminate if this time is exceeded (regardless of `samples`), but at least one sample will always be taken. In practice, actual runtime can overshoot the budget by the duration of a sample. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.seconds = 5`.
-- `evals`: The number of evaluations per sample. For best results, this should be kept consistent between trials. A good guess for this value can be automatically set on a benchmark via `tune!`, but using `tune!` can be less consistent than setting `evals` manually. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.evals = 1`.
+- `evals`: The number of evaluations per sample. For best results, this should be kept consistent between trials. A good guess for this value can be automatically set on a benchmark via `tune!`, but using `tune!` can be less consistent than setting `evals` manually (which bypasses tuning). Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.evals = 1`. If the function you study mutates its input, it is probably a good idea to set `evals=1` manually.
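For context on the sentence added in the diff, here is a sketch (not part of the diff) of benchmarking a mutating function, where extra evaluations per sample would operate on already-mutated input:

```julia
using BenchmarkTools

# sort! mutates its argument, so a second evaluation within the same sample
# would sort an already-sorted vector; evals=1 plus a per-sample setup avoids that.
x = rand(1000)
@benchmark sort!(v) setup=(v = copy($x)) evals=1
```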


I think a clearer API might be to introduce a special value like `evals=nothing` to indicate an unset `evals` (see the sketch after the suggested wording below).

The text could then look something like this:

  • `evals`: The number of evaluations per sample. For best results, this should be kept consistent between trials. If not specified manually, defaults to `BenchmarkTools.DEFAULT_PARAMETERS.evals = nothing` and `tune!` is used to automatically set the value to a good guess for the benchmark.

    Functions which are sensitive to additional un-benchmarked evaluations (e.g. mutating functions) can set `evals` manually to ensure no automatic tuning is performed.
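For illustration only, a hypothetical sketch of this sentinel design; `Params` and `tune_evals!` are made-up names, not BenchmarkTools internals:

```julia
# Hypothetical sentinel design: `nothing` means "unset, let tuning pick a value".
mutable struct Params
    evals::Union{Nothing,Int}
end

function tune_evals!(p::Params, guess::Int)
    p.evals === nothing && (p.evals = guess)  # only overwrite when unset
    return p
end

tune_evals!(Params(nothing), 100).evals  # 100: tuned
tune_evals!(Params(7), 100).evals        # 7: manual value respected
```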

@gdalle (Collaborator, Author)

I agree that it would be clearer, and that is what @vchuravy suggested at first. There are two reasons I chose the `evals_set::Bool` implementation instead (sketched below):

  • It's a bit convoluted to remember that `evals=nothing` really means `evals=1`.
  • Much of the code relies on a numeric value for `evals`, so my choice was a way of fixing these numerous bugs without checking `isnothing(evals)` every time.
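A hypothetical sketch of the merged approach, to contrast with the sentinel version above (again with simplified made-up names rather than the package's actual internals):

```julia
# Boolean-flag design: `evals` stays numeric, so downstream code never
# has to handle `nothing`; the flag records whether the user set it.
mutable struct Params
    evals::Int
    evals_set::Bool
end

function tune_evals!(p::Params, guess::Int)
    p.evals_set || (p.evals = guess)  # skip tuning when manually set
    return p
end

tune_evals!(Params(1, false), 100).evals  # 100: tuned from the default
tune_evals!(Params(7, true), 100).evals   # 7: manual value respected
```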

@filchristou

@gdalle @vchuravy Is there anything specific holding back a release that will contain this PR? Navigating through the source code on GitHub, it was confusing that `evals_set` wasn't available in the latest release.

@gdalle (Collaborator, Author) commented Nov 22, 2023

Feel free to weigh in here: #344
