
proposal: testing: Compute benchmark statistics #34626

Open
voronaam opened this issue Sep 30, 2019 · 4 comments

@voronaam commented Sep 30, 2019

When a user supplies a high `count` parameter (`go test -count=N`), they want to see summary statistics in addition to the long stream of numbers in the output.

PoC code available in #34479 along with sample output.

The main reason to do this inside `go test` is that the `prettyPrint` output rounds the raw measurements, causing a loss of precision in any external tool that computes statistics from the printed output.

The PoC computes the mean and a 95% confidence interval for any benchmark run with a count of 5 or higher. I believe those are reasonable defaults and do not propose making them configurable.

If this proposal is accepted, I can complete the PoC code reasonably quickly.
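
For reference, here is a minimal sketch of the kind of computation involved, assuming a hardcoded Student's t table and hypothetical `tValue95`/`meanCI95` helpers (the actual PoC in #34479 may compute this differently):

```go
package main

import (
	"fmt"
	"math"
)

// tValue95 returns an approximate two-sided 95% Student's t critical
// value for the given degrees of freedom (hypothetical helper).
func tValue95(df int) float64 {
	table := map[int]float64{
		1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
		6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262,
	}
	if v, ok := table[df]; ok {
		return v
	}
	return 1.960 // normal approximation for larger samples
}

// meanCI95 computes the sample mean and the half-width of a 95%
// confidence interval for a slice of per-iteration timings.
func meanCI95(samples []float64) (mean, halfWidth float64) {
	n := float64(len(samples))
	for _, s := range samples {
		mean += s
	}
	mean /= n

	var ss float64
	for _, s := range samples {
		d := s - mean
		ss += d * d
	}
	stddev := math.Sqrt(ss / (n - 1)) // sample standard deviation
	return mean, tValue95(len(samples)-1) * stddev / math.Sqrt(n)
}

func main() {
	// Example: five ns/op measurements from repeated -count runs.
	nsPerOp := []float64{85.1, 84.9, 86.3, 85.5, 85.0}
	mean, ci := meanCI95(nsPerOp)
	fmt.Printf("BenchmarkFoo  %.2f ns/op ± %.2f (95%% CI, n=%d)\n",
		mean, ci, len(nsPerOp))
}
```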

@gopherbot gopherbot added this to the Proposal milestone Sep 30, 2019
@gopherbot gopherbot added the Proposal label Sep 30, 2019
@mvdan (Member) commented Oct 1, 2019

Have you looked at https://godoc.org/golang.org/x/perf/cmd/benchstat? Is there a reason why you need this to be part of go test directly?

@voronaam (Author) commented Oct 1, 2019

The reason is outlined in the proposal.

> The main reason to do this inside `go test` is that the `prettyPrint` output rounds the raw measurements, causing a loss of precision in any external tool that computes statistics from the printed output.

But if everybody is happy with this loss, I am fine with the proposal being rejected.
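
To illustrate the concern, here is a minimal sketch of the rounding in question, assuming a simplified version of the significant-digit formatting that `go test` applies to per-op times (the real rules live in `testing`'s `prettyPrint`; the cutoffs below are illustrative, not the actual ones):

```go
package main

import "fmt"

// roundNsOp mimics, approximately, the significant-digit formatting
// that go test applies to ns/op values before printing. The exact
// thresholds in testing's prettyPrint may differ.
func roundNsOp(ns float64) string {
	switch {
	case ns >= 100:
		return fmt.Sprintf("%.0f ns/op", ns)
	case ns >= 10:
		return fmt.Sprintf("%.1f ns/op", ns)
	default:
		return fmt.Sprintf("%.2f ns/op", ns)
	}
}

func main() {
	raw := 85.234567 // what the benchmark actually measured
	// Prints "85.2 ns/op": a downstream tool parsing the output
	// can only average the rounded values, not the raw ones.
	fmt.Println(roundNsOp(raw))
}
```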

@mvdan (Member) commented Oct 1, 2019

Sorry, I missed that bit from your text. Have you found real scenarios where the loss of precision actually causes problems? I've used benchmarks at scales from nanoseconds to multiple seconds, and I've never really been bothered by the precision. Variance is the usual problem, at least that I've seen.

@voronaam (Author) commented Oct 5, 2019

I do not think it was the cause of any problems. It is just that I do not like losing precision.

I should clarify that I am fine with keeping this as a local patch just for myself if the general user base is satisfied with the current state.
