
cmd/go: 'go test' should dynamically compute best timeout #12446

Open
dsnet opened this Issue Sep 2, 2015 · 4 comments

@dsnet
Member

dsnet commented Sep 2, 2015

Using go1.5

I discovered that "go test" crashes after the default timeout of 600s has passed. I was running all the benchmarks in compress/flate with -benchtime 10s. Since each benchmark runs for about 10s or more, and there are 36 benchmarks in that package, the run takes at least 360s, which comes close to the default timeout of 600s.

In situations like these, the go test tool should be able to recognize that a 600s timeout is cutting things close when running 36 benchmarks for 10s each, and choose a more appropriate timeout.

@dsnet

Member

dsnet commented Sep 2, 2015

One possible timeout is: max(10*time.Minute, 5*benchTime*numBenchmarks)

The 5x increase was chosen since benchmark iteration counts seem to follow the progression 1x, 2x, 5x, 10x, 20x, 50x, etc. Thus, at worst, the last iteration of a Benchmark may run for 2.5x longer than the targeted time. Also, since the sum of all the iterations of a given Benchmark resembles a geometric series with r=1/2, we get another factor of 2x. Thus, 2*2.5 = 5x.

@dsnet

Member

dsnet commented Oct 4, 2016

The addition of sub-benchmarks has made this feature harder to implement (if not impractical) since the number of benchmarks is not known ahead of time.
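To illustrate why the count is unknowable up front, here is a hypothetical data-driven benchmark: the sub-benchmarks are registered via b.Run while the parent benchmark executes, so their number depends on runtime data (testing.Benchmark is used here only to drive it outside 'go test'):

```go
package main

import (
	"fmt"
	"testing"
)

// benchCopy registers one sub-benchmark per buffer size. 'go test'
// cannot know how many sub-benchmarks exist until the parent
// benchmark actually runs the loop below.
func benchCopy(b *testing.B, seen map[int]bool) {
	for _, size := range []int{1 << 10, 1 << 16} {
		size := size
		seen[size] = true
		b.Run(fmt.Sprintf("size=%d", size), func(b *testing.B) {
			src := make([]byte, size)
			dst := make([]byte, size)
			b.ResetTimer()
			for i := 0; i < b.N; i++ {
				copy(dst, src)
			}
		})
	}
}

func main() {
	seen := map[int]bool{}
	testing.Benchmark(func(b *testing.B) { benchCopy(b, seen) })
	fmt.Println("sub-benchmarks discovered at run time:", len(seen))
}
```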

@davecheney

Contributor

davecheney commented Oct 5, 2016

I don't think this bug should be addressed. That is to say, I don't think we should change the go tool, as doing so would enable people to continue doing the wrong thing.

To benchmark accurately you should have a before and an after sample and run both several times back to back to rule out thermal variation. This is what the -c flag gives you: a binary that you can run with whatever timeout and count flags you need on dedicated benchmarking hardware.

Working around this inside go test encourages people to benchmark on shared cloud CI hardware, which is inaccurate.

@josharian

Contributor

josharian commented Apr 10, 2017

See also #19394.
