proposal: cmd/vet: flag benchmarks that don’t use b #38677

Open
josharian opened this issue Apr 26, 2020 · 7 comments

@josharian (Contributor) commented Apr 26, 2020

I propose that cmd/vet flag any benchmark that doesn’t have any reference to b in its body.

  • Frequency: Low among experienced gophers, but not an uncommon mistake among new gophers.
  • Correctness: Benchmarks written without using b do not function as benchmarks. Often this gets caught by a user who is puzzled by the output, but not always, particularly if the benchmark is slow.
  • Precision: 100%, or very near it. You cannot write a correct benchmark without using b.N, b.Run, b.RunParallel, or somehow passing b or one of its fields to another function to do that for you.
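
To make the mistake concrete, here is a hypothetical pair of benchmarks (the function under test is just a stand-in):

```go
package fields_test // hypothetical example, not code from this proposal

import (
	"strings"
	"testing"
)

// Broken: the body never references b, so the work runs once per call no
// matter what value of b.N the framework passes in, and the reported
// ns/op is meaningless.
func BenchmarkFieldsBroken(b *testing.B) {
	strings.Fields("a b c d")
}

// Correct: the measured work is repeated b.N times.
func BenchmarkFields(b *testing.B) {
	for i := 0; i < b.N; i++ {
		strings.Fields("a b c d")
	}
}
```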

The implementation should be fairly straightforward. Pick out benchmarks. Pick out the identifier (usually b). Look for any uses of that identifier. If none, complain.
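
A minimal sketch of what such a check could look like on top of the go/analysis framework. The analyzer name is invented, and signature validation, _test.go filtering, and other corner cases a real vet check would need are omitted:

```go
package benchb // hypothetical analyzer, not the actual vet implementation

import (
	"go/ast"
	"strings"

	"golang.org/x/tools/go/analysis"
)

// Analyzer reports Benchmark functions whose bodies never refer to their
// *testing.B parameter.
var Analyzer = &analysis.Analyzer{
	Name: "benchb",
	Doc:  "report benchmarks that never use their *testing.B parameter",
	Run:  run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	for _, file := range pass.Files {
		for _, decl := range file.Decls {
			fn, ok := decl.(*ast.FuncDecl)
			if !ok || fn.Recv != nil || fn.Body == nil ||
				!strings.HasPrefix(fn.Name.Name, "Benchmark") {
				continue
			}
			params := fn.Type.Params.List
			if len(params) != 1 || len(params[0].Names) != 1 {
				continue // not the Benchmark(b *testing.B) shape; skip for brevity
			}
			bIdent := params[0].Names[0]
			if bIdent.Name == "_" {
				pass.Reportf(fn.Pos(), "%s never uses its *testing.B parameter", fn.Name.Name)
				continue
			}
			bObj := pass.TypesInfo.Defs[bIdent]
			used := false
			ast.Inspect(fn.Body, func(n ast.Node) bool {
				if id, ok := n.(*ast.Ident); ok && pass.TypesInfo.Uses[id] == bObj {
					used = true
				}
				return !used // stop descending once a use is found
			})
			if !used {
				pass.Reportf(fn.Pos(), "%s never uses its *testing.B parameter", fn.Name.Name)
			}
		}
	}
	return nil, nil
}
```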

cc @robpike @mvdan

@gopherbot gopherbot added this to the Proposal milestone Apr 26, 2020
@gopherbot gopherbot added the Proposal label Apr 26, 2020
@mvdan (Member) commented Apr 26, 2020

I've actually seen this twice in the past few months. Something else I tend to see is slow benchmarks that do a lot of heavy initialization work, but don't take care of resetting the timer.
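
For the timer case, the usual fix is a b.ResetTimer call after the setup; a minimal illustration (names hypothetical):

```go
package fields_test // hypothetical example, not code from this thread

import (
	"strings"
	"testing"
)

func buildLargeInput() string { return strings.Repeat("lorem ipsum ", 1<<16) }

func BenchmarkFieldsLarge(b *testing.B) {
	input := buildLargeInput() // expensive one-time setup
	b.ResetTimer()             // without this, the setup cost is billed to the measured time
	for i := 0; i < b.N; i++ {
		strings.Fields(input)
	}
}
```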

I'm not sure how far vet can get here, though. Writing a decent benchmark is more than just using b, and I worry that the vet check could give a false sense of security when their benchmark is "fixed".

Wouldn't there be a way for the testing package to catch this? For example, for any benchmark that can be run with N=1 quickly (which should be the vast majority of them), any run with N>100 should at least take 10x more time. If it does not, we are either benchmarking something else (like the initialization code, or nothing at all), or the benchmark doesn't use b properly.

It would be hard to cover very expensive benchmarks by default there, such as those that take one second per iteration, since the default is -benchtime=1s. But I think that's an okay tradeoff.
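
A rough, standalone sketch of that ratio check, written against a plain func(n int) stand-in because a *testing.B with a caller-chosen N can't be constructed outside package testing; an in-tree implementation would look quite different:

```go
package benchcheck // hypothetical sketch, not part of package testing

import "time"

// LooksBroken reports whether a benchmark body appears to ignore n:
// running 100 iterations should take noticeably longer than running 1.
func LooksBroken(run func(n int)) bool {
	timeN := func(n int) time.Duration {
		start := time.Now()
		run(n)
		return time.Since(start)
	}
	one := timeN(1)
	many := timeN(100)
	// If 100 iterations aren't at least ~10x slower than 1 iteration, the
	// body is probably ignoring n, or is dominated by fixed setup work.
	// (A real check would also need to handle bodies too fast to time.)
	return many < 10*one
}
```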

@josharian (Contributor, Author) commented Apr 26, 2020

> Writing a decent benchmark is more than just using b, and I worry that the vet check could give a false sense of security when their benchmark is "fixed".

True. I'd hope that being made aware of the problem and referred (by the vet error message) to some docs would set most people on a better path.

> Wouldn't there be a way for the testing package to catch this?

There's been scattered discussion among @rsc, @bcmills, @aclements, myself, and maybe others about having the benchmark framework detect non-linearity. But that's a heavy lift, and you'll get false positives, particularly since you have to base the decision on a single run.

There's also been discussion of having the benchmark framework print all intermediate b.N runs and have benchstat detect non-linearity, bi-modal distributions, and other distribution problems. But then you have to know about benchstat, at which point, you probably aren't the target audience for this vet check.

I definitely wish we could somehow build benchstat into package testing. But the "before"/"after" problem is a thorny one. (See my failed machinations in #10930.)
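
(For context, the manual before/after workflow being described looks roughly like this, with illustrative file names; the point is that you have to know to do all of it yourself.)

```
go test -bench=Fields -count=10 > old.txt
# apply the change being measured
go test -bench=Fields -count=10 > new.txt
benchstat old.txt new.txt
```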

@robpike (Contributor) commented Apr 27, 2020

I'm not in favor of this. It seems a minor problem and opening vet up to analyzing people's tests leads us down a long and slippery path.

@mvdan (Member) commented Apr 27, 2020

@josharian your thoughts are very interesting - you've clearly been thinking about this for a while :)

I agree that, in general, writing good benchmarks in Go requires reading quite a lot of content and using multiple tools together carefully. Even though it's a difficult task, I still think that the only way to truly solve that problem is to make the tooling better, whether by making the testing package smarter or by bundling benchstat into testing.

To me, fuzzing and benchmarking are kind of similar, in the sense that they are very helpful and necessary for many developers, but they are difficult to use correctly simply because not enough hours have gone into properly including them as part of go test.

Adding small checks to vet could possibly help in the near term, but we can only catch the most trivial mistakes there. So while this could meet the bar for inclusion in vet, I don't think it's the direction we want to be heading in.

@bcmills (Member) commented Apr 27, 2020

> It seems a minor problem and opening vet up to analyzing people's tests leads us down a long and slippery path.

A Benchmark function that doesn't depend on b.N at all is guaranteed not to produce valid results.

In contrast, a Test function that ignores its *testing.T could potentially be valid: it might indicate a test whose failure mode is (say) an uncaught panic that terminates the test program with a non-zero status.
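
For instance (a hypothetical example, not taken from this thread):

```go
package example_test // hypothetical

import (
	"regexp"
	"testing"
)

// This test never touches t, yet it is meaningful: if the pattern were
// invalid, MustCompile would panic and go test would report a failure.
func TestMustCompile(t *testing.T) {
	regexp.MustCompile(`a+b`)
}
```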

So I don't think there is a slippery slope here: the argument for Benchmark functions already does not apply to Test functions.

@gopherbot commented Apr 29, 2020

Change https://golang.org/cl/230978 mentions this issue: testing: fail benchmarks that don't loop over b.N

@rsc (Contributor) commented Apr 29, 2020

We do vet API usage, checking things like printf format strings.
And we vet test code, checking things like t.Fatalf's format strings.
In general it seems OK to vet misuse of test APIs, same as other APIs,
provided the checks meet the same bar.

That said, I am not convinced this specific check needs to be in vet.
Checking "did it use b at all" would catch some but not all problems
and would be easily defeated. It doesn't seem specific enough to me.
And I think the mistake can be detected more precisely at runtime instead.
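
For example, a hypothetical benchmark like this one mentions b, so it would satisfy the proposed check, yet it is still broken:

```go
package fields_test // hypothetical

import (
	"strings"
	"testing"
)

// References b (so a "does it use b" check is satisfied) but still never
// loops over b.N, so the reported numbers are meaningless.
func BenchmarkStillWrong(b *testing.B) {
	b.ReportAllocs()
	strings.Fields("a b c d")
}
```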

As Josh noted, we've talked before about having a linearity check.
That can be a bit sensitive to timings but might be worth doing anyway.
Whether to do that is maybe borderline pending experiments
(that none of us have ever made time for).

But a highly accurate linearity check is not needed to detect
"this line (plotting t against b.N) has slope 0 instead of slope 1".
I sent CL 230978.
With that change, a vet check doesn't seem worthwhile.
(The only benefit would be that the vet check would catch
the bug in a benchmark that is never actually run, but if you
never run your benchmark, do you really care if it has a bug?)
