
testing: add flag to specify size of b.N during benchmarks #26811

Closed
theckman opened this issue Aug 5, 2018 · 1 comment

Comments

@theckman
Contributor

commented Aug 5, 2018

This is a feature request to add a flag to the go test command that lets you specify the value of b.N for a benchmark. I've heard a few different requests for this functionality in the Go community, so I thought it would be worth raising as an issue. Note that this isn't something the -count flag can solve, as that controls how many times the BenchmarkXxx function is invoked, not how many iterations happen inside the benchmark (b.N).
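
For illustration (a minimal sketch, not code from this issue): b.N is the loop bound the testing framework chooses inside a single benchmark run, growing it until the run takes long enough to measure, whereas -count only repeats the whole BenchmarkXxx function, with the framework still picking b.N each time.

```go
package example_test

import "testing"

// work stands in for whatever code is being measured.
func work() {}

func BenchmarkWork(b *testing.B) {
	// b.N is chosen by the testing framework; there is currently no
	// go test flag to pin it to a specific value, which is what this
	// issue asks for.
	for i := 0; i < b.N; i++ {
		work()
	}
}
```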

I'm in the process of helping do a technical review of a Go book for newbies, and the author wants to show the performance benefits of using concurrency for web requests over serializing them. Originally, the author wanted to teach readers how to write benchmarks first, and to then use these benchmarking skills to compare a concurrent vs serial implementation. The issue here is that because b.N can't be set to a constant value of 1, the author doesn't want to cause a DDoS against golang.org (and another site) from people running this benchmark. Because we're looking to measure the performance difference on the magnitude of milliseconds, running the benchmark once should give us an accurate-enough result to show the performance improvement.
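
A hypothetical sketch of the kind of benchmark the book describes (the real code isn't in this issue; the URLs and function names below are placeholders). Every iteration issues real HTTP requests, so if the framework ramps b.N into the thousands, readers collectively hammer the target sites; being able to pin b.N to 1 would avoid that:

```go
package example_test

import (
	"net/http"
	"sync"
	"testing"
)

// Placeholder targets; the book apparently uses golang.org and one other site.
var urls = []string{"https://golang.org/", "https://example.com/"}

// fetchSerial requests each URL one after another.
func fetchSerial(urls []string) {
	for _, u := range urls {
		if resp, err := http.Get(u); err == nil {
			resp.Body.Close()
		}
	}
}

// fetchConcurrent requests all URLs at once and waits for them to finish.
func fetchConcurrent(urls []string) {
	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			if resp, err := http.Get(u); err == nil {
				resp.Body.Close()
			}
		}(u)
	}
	wg.Wait()
}

func BenchmarkSerial(b *testing.B) {
	for i := 0; i < b.N; i++ { // each iteration hits the real sites
		fetchSerial(urls)
	}
}

func BenchmarkConcurrent(b *testing.B) {
	for i := 0; i < b.N; i++ {
		fetchConcurrent(urls)
	}
}
```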

Another request I've seen in the Go community for this functionality, though I'm less convinced by it, is using a constant iteration count so that you can compare the total wallclock time of the benchmarks. I'm not convinced because most people do (and should) care about the ns/op result on a per-benchmark basis, not the total wallclock time it took the benchmark suite to run. I think they wanted to use this as a way to automatically detect large changes in performance, without needing to parse the results and track each benchmark between runs.

@cespare

Contributor

commented Aug 5, 2018

Closing as a duplicate of #24735. Please comment if you disagree.

@cespare closed this Aug 5, 2018
