
testing: investigate possible bias due to time estimation #27168

Open
josharian opened this Issue Aug 23, 2018 · 1 comment

@josharian
Contributor

josharian commented Aug 23, 2018

In #24735 (comment) I wrote:

Suppose we have a benchmark with high variance. We use our estimates to try to get near the benchtime. We are more likely to exceed the benchtime if we get a particularly slow run. A particularly fast run is more likely to trigger another benchmark run.
The current approach thus introduces bias.
One simple way to fix this would be to decide when our estimate is going to be "close enough", that is, when we are one iteration away from being done, and then stick with that final iteration even if it falls short of the benchtime.

This issue is to follow up on this concern, independently of #24735, which is really about something else.
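
To make the concern concrete, here is a minimal simulation sketch contrasting the two strategies. This is not the actual `testing.B` code: the 1.2x growth padding, the uniform 0.5–1.5ms iteration cost, and the "close enough" threshold of `benchtime/2` are all illustrative assumptions.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// oneIter models a single iteration of a high-variance benchmark body:
// nominally 1ms, uniformly anywhere from 0.5ms to 1.5ms.
func oneIter(rng *rand.Rand) time.Duration {
	return time.Duration(float64(time.Millisecond) * (0.5 + rng.Float64()))
}

// runN models running the benchmark body n times (b.N iterations).
func runN(rng *rand.Rand, n int) time.Duration {
	var total time.Duration
	for i := 0; i < n; i++ {
		total += oneIter(rng)
	}
	return total
}

// grow predicts the n needed to reach target and pads it slightly,
// loosely mirroring how testing.B scales up b.N (details differ).
func grow(n int, d, target time.Duration) int {
	next := int(1.2 * float64(n) * float64(target) / float64(d))
	if next <= n {
		next = n + 1
	}
	return next
}

func main() {
	const benchtime = 100 * time.Millisecond
	rng := rand.New(rand.NewSource(1))

	// Current approach (simplified): retry with a larger n until a run
	// meets benchtime. A slow final run ends the loop; a fast one forces
	// another, larger run -- the bias described above.
	n, d := 1, oneIter(rng)
	for d < benchtime {
		n = grow(n, d, benchtime)
		d = runN(rng, n)
	}
	fmt.Printf("retry-until-benchtime: n=%d elapsed=%v per-op=%v\n",
		n, d, d/time.Duration(n))

	// Proposed fix (sketch): once the estimate is "close enough"
	// (here, assumed to mean past benchtime/2), commit to one final
	// predicted iteration count and keep its result even if it falls
	// short of benchtime, so a fast run can no longer trigger a retry.
	n, d = 1, oneIter(rng)
	for d < benchtime/2 {
		n = grow(n, d, benchtime)
		d = runN(rng, n)
	}
	final := int(float64(n) * float64(benchtime) / float64(d))
	if final < 1 {
		final = 1
	}
	fd := runN(rng, final)
	fmt.Printf("commit-to-final:       n=%d elapsed=%v per-op=%v\n",
		final, fd, fd/time.Duration(final))
}
```

The tradeoff in the sketch is the one described in the quote: the committed final run may report slightly less than benchtime of measurement, but its result is kept unconditionally, so a fast run no longer selects for an additional retry.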

@rsc


Contributor

rsc commented Sep 26, 2018

I don't see a decision here. Moving to NeedsInvestigation (probably by Josh).
