proposal: cmd/vet: flag benchmarks that don’t use b #38677
I propose that cmd/vet flag any benchmark that doesn’t have any reference to b in its body.
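For concreteness, this is the kind of benchmark the check would flag (`parse` is a hypothetical stand-in for the code under test):

```go
package demo

import "testing"

func parse(s string) {} // placeholder for the code being benchmarked

// b is never referenced, so the body runs once regardless of b.N and
// the reported ns/op is meaningless.
func BenchmarkParse(b *testing.B) {
	parse("some fixed input")
}
```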
The implementation should be fairly straightforward. Pick out benchmarks. Pick out the identifier (usually `b`) from the function's signature, and check whether it is referenced anywhere in the body.
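Not part of the proposal text, but a minimal sketch of what such a pass could look like, assuming the `golang.org/x/tools/go/analysis` framework (the name `benchuse` and the messages are placeholders; a real pass would also verify that the parameter's type is `*testing.B` and that the file is a `_test.go` file):

```go
package benchuse

import (
	"go/ast"
	"strings"

	"golang.org/x/tools/go/analysis"
)

var Analyzer = &analysis.Analyzer{
	Name: "benchuse",
	Doc:  "flag benchmarks that never reference their *testing.B parameter",
	Run:  run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	for _, file := range pass.Files {
		for _, decl := range file.Decls {
			fn, ok := decl.(*ast.FuncDecl)
			if !ok || fn.Body == nil || !strings.HasPrefix(fn.Name.Name, "Benchmark") {
				continue
			}
			// Pick out the identifier (usually b) from the signature.
			params := fn.Type.Params.List
			if len(params) != 1 || len(params[0].Names) != 1 {
				continue // not the standard func BenchmarkX(b *testing.B) shape
			}
			param := params[0].Names[0]
			if param.Name == "_" {
				pass.Reportf(fn.Pos(), "%s discards its *testing.B parameter", fn.Name.Name)
				continue
			}
			obj := pass.TypesInfo.Defs[param]
			if obj == nil {
				continue
			}
			used := false
			ast.Inspect(fn.Body, func(n ast.Node) bool {
				if id, ok := n.(*ast.Ident); ok && pass.TypesInfo.Uses[id] == obj {
					used = true
				}
				return !used // stop descending once a use is found
			})
			if !used {
				pass.Reportf(fn.Pos(), "%s never uses its *testing.B parameter", fn.Name.Name)
			}
		}
	}
	return nil, nil
}
```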
I've actually seen this twice in the past few months. Something else I tend to see is slow benchmarks that do a lot of heavy initialization work, but don't take care of resetting the timer.
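For reference, the usual fix for that second case is a `b.ResetTimer()` call after setup; sketched here with placeholder helpers standing in for real, expensive setup and the code under test:

```go
package demo

import "testing"

func buildLargeInput() []byte { return make([]byte, 1<<20) } // placeholder setup
func decode(data []byte)      { _ = len(data) }              // placeholder code under test

func BenchmarkDecode(b *testing.B) {
	data := buildLargeInput() // expensive setup
	b.ResetTimer()            // exclude the setup above from the measurement
	for i := 0; i < b.N; i++ {
		decode(data)
	}
}
```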
I'm not sure how far vet can get here, though. Writing a decent benchmark is more than just using `b.N`.
Wouldn't there be a way for the testing package to catch this? For example, for any benchmark that can be run with N=1 quickly (which should be the vast majority of them), any run with N>100 should take at least 10x more time. If it does not, we are either benchmarking something else (like the initialization code, or nothing at all), or the benchmark doesn't use `b.N` at all.
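To sketch that idea (purely illustrative, not real `testing` internals; `timeBenchmark` is a hypothetical hook that runs the benchmark body with a given N):

```go
package sketch

import "time"

// suspicious reports whether a benchmark looks like it ignores b.N:
// if 100 iterations aren't meaningfully slower than 1 iteration, we are
// probably timing setup (or nothing) rather than the loop body.
func suspicious(timeBenchmark func(n int) time.Duration) bool {
	t1 := timeBenchmark(1)
	if t1 > 100*time.Millisecond {
		return false // too expensive to probe cheaply; skip the check
	}
	t100 := timeBenchmark(100)
	return t100 < 10*t1
}
```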
It would be hard to cover very expensive benchmarks by default there, such as those that take one second per iteration, since the default `-benchtime` is only one second.
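(Today such benchmarks are typically run with an explicit iteration count or a longer budget, e.g.:)

```
go test -bench=BenchmarkExpensive -benchtime=100x  # run exactly 100 iterations
go test -bench=BenchmarkExpensive -benchtime=30s   # spend ~30s instead of the 1s default
```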
True. I'd hope that being made aware of the problem and referred (by the vet error message) to some docs would set most people on a better path.
There's been scattered discussion between @rsc, @bcmills, @aclements, myself, and maybe others, about having the benchmark framework detect non-linearity. But that's a heavy lift, and you'll get false positives, particularly since you have to base the decision on a single run.
There's also been discussion of having the benchmark framework print all intermediate b.N runs and have benchstat detect non-linearity, bi-modal distributions, and other distribution problems. But then you have to know about benchstat, at which point, you probably aren't the target audience for this vet check.
I definitely wish we could somehow build benchstat into package testing. But the "before"/"after" problem is a thorny one. (See my failed machinations in #10930.)
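(For context, the before/after workflow benchstat expects today looks roughly like this:)

```
go test -run=NONE -bench=. -count=10 > old.txt
# ...apply the change you want to measure...
go test -run=NONE -bench=. -count=10 > new.txt
benchstat old.txt new.txt
```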
@josharian your thoughts are very interesting - you've clearly been thinking about this for a while :)
I agree that, in general, writing good benchmarks in Go requires reading quite a lot of content and using multiple tools together carefully. Even though it's a difficult task, I still think that the only way to truly solve that problem is to make the tooling better, be it by making the `testing` package smarter or by adding checks like this one to `vet`.
To me, fuzzing and benchmarking are kind of similar, in the sense that they are very helpful and necessary for many developers, but they are difficult to use correctly simply because not enough hours have gone into properly including them as part of the core Go tooling.
Adding small checks to `vet` that catch outright broken benchmarks seems like a comparatively cheap way to start improving that.
In contrast, a full non-linearity detector built into the benchmark framework would be a much larger effort, with the false-positive risk mentioned above.
So I don't think there is a slippery slope here: the argument for this simple check doesn't commit `vet` to catching every possible benchmarking mistake.
We do vet API usage, checking things like printf format strings.
That said, I am not convinced this specific check needs to be in vet.
As Josh noted, we've talked before about having a linearity check.
But a highly accurate linearity check is not needed to detect a benchmark that never uses `b.N` at all.