
cmd/go: 'go test' should dynamically compute best timeout #12446

Open · dsnet opened this issue Sep 2, 2015 · 6 comments

@dsnet (Member) commented Sep 2, 2015

Using go1.5

I discovered that "go test" crashes after the default timeout of 600s has passed. I was running all the benchmarks in compress/flate with -benchtime 10s. Since each benchmark runs for about 10s or more, and there are 36 benchmarks in that package, the run takes at least 360s, which is close to the default timeout of 600s.

In situations like these, the go test tool should be able to figure out that a 600s timeout is cutting things close when running 36 benchmarks for 10s each, and choose a more appropriate timeout.

@dsnet (Member, Author) commented Sep 2, 2015

One possible timeout is: max(10*time.Minute, 5*benchTime*numBenchmarks)

The 5x factor was chosen since the benchmark iteration counts seem to follow the progression 1x, 2x, 5x, 10x, 20x, 50x, etc. Thus, at worst, the last iteration of a benchmark may run for 2.5x longer than the targeted time. Also, since the sum of all iterations of a given benchmark resembles a geometric series with r=1/2, we get another factor of 2x. Thus, 2*2.5 = 5x.

@dsnet (Member, Author) commented Oct 4, 2016

The addition of sub-benchmarks has made this feature harder to implement (if not impractical) since the number of benchmarks is not known ahead of time.

@davecheney (Contributor) commented Oct 5, 2016

I don't think this bug should be addressed. That is to say, I don't think we should change the go tool, as doing so would enable people to continue to do the wrong thing.

To accurately benchmark you should have a before and an after sample, and run both several times back to back to rule out thermal variation. This is what the -c flag gives you: a binary that you can run with whatever timeout and count flags you need on dedicated benchmarking hardware.

Working around this inside go test encourages people to benchmark on shared cloud CI hardware, which is inaccurate.

@josharian (Contributor) commented Apr 10, 2017

See also #19394.

@bcmills (Member) commented Jan 23, 2019

> The addition of sub-benchmarks has made this feature harder to implement (if not impractical) since the number of benchmarks is not known ahead of time.

One option would be to apply the timeout to only the test portion, and allow the benchmarks to continue running for -benchtime each. (For example, cmd/go could look for the messages that indicate the beginning and end of benchmarks and pause the timeout during that interval.)

@josharian (Contributor) commented Jan 23, 2019

> allow the benchmarks to continue running for -benchtime each

That sounds eminently sensible. Unfortunately, -benchtime measures the time the benchmark timer is running, not the time the benchmark code is running. So any benchmark that does non-trivial work using StartTimer and StopTimer will run afoul of this. Also, benchmark warm-up and over-estimation commonly cause running a benchmark without StartTimer/StopTimer/ResetTimer to take multiples of the -benchtime. (I want to improve that, e.g.
#27168 (comment), but no fix will ever be complete.)
