testing: tiny benchmark with StopTimer runs forever #27217
From at least Go 1.4 onward, if we run a tiny benchmark (count++) and use StopTimer() for initialization (count = 0), the benchmark runs for a very long time, perhaps forever.
If we comment out StopTimer() (//b.StopTimer()) then the benchmark quickly runs to completion.
runtime.ReadMemStats, which is called by both StartTimer and StopTimer, is known to be slow, though I'm not sure it's expected to be as slow as shown here.
Calling just StartTimer without StopTimer is a no-op because b.timerOn is true.
The uses of StartTimer/StopTimer are usually around one-time setup/teardown code that happens before or after the b.N loop.
I encountered the same while benchmarking fairly small functions. Via pprof, I could see that ReadMemStats (called via both StartTimer and StopTimer) was taking up ~90% of the CPU time, while the func I was benchmarking was only taking about 3% of CPU time.
Initially, I too thought that
And this is where it gets whacky. For example, on
So I presume this is what you're seeing. By default the benchtime is 1s, so it's reasonable to think that your tiny benchmark could be slowed down enough to run for a few minutes. I'd suggest trying to run it with
Note that if the goal is for the benchmark code to execute for 1s, and there's 97% overhead due to StopTimer/StartTimer, then the correct overall execution time is 33s. So maybe the b.N estimate is correct.
I've also thought for some time that we should set an absolute limit on b.N. To mangle Tolstoy: how many iterations does one benchmark need? If we capped b.N at something reasonably sized, like 100k, that would mitigate these disaster scenarios and also generally speed up running microbenchmarks. (I suppose I should file a new issue for this?)
It can completely mess up the benchmark numbers for init functions that were too small. Moreover, it can throw the -benchtime estimates way off. For example, 'benchinit cmd/go' was taking over a minute to run the benchmark instead of the expected ~1s. The benchtime estimate being off is likely the upstream issue golang/go#27217. The fact that StartTimer and StopTimer are expensive is being tracked in golang/go#20875.