
testing: reporting non-standard benchmark results #16110

Closed
benburkert opened this issue Jun 18, 2016 · 9 comments

@benburkert (Contributor) commented Jun 18, 2016

I would like to report the results of a non-standard benchmark (non google/benchmark style) alongside existing benchmarks, and do so following the proposed Go benchmark data format. For example, the pbench package reports percentiles in addition to the standard results.

I was unable to implement this benchmark without using reflection to access some unexported fields, and there was no mechanism for using the same io.Writer for the output. A preferable solution would be a new method on B that, given a name and a benchmark-result argument, formats and writes the results in the standard format.

Additionally, adding a ReportAllocs field to BenchmarkResult would be useful for when a benchmark is run with -benchmem but a non-standard benchmark does not support malloc statistics.

@josharian josharian added this to the Go1.8 milestone Jun 18, 2016

@josharian (Contributor) commented Jun 18, 2016

Interesting. cc @aclements @mpvl for thoughts. Tentatively marking this Go 1.8.

@rsc (Contributor) commented Oct 10, 2016

What is the API you are looking for?

@rsc rsc added the WaitingForInfo label Oct 10, 2016

@rsc (Contributor) commented Oct 19, 2016

(comment minimized)

@benburkert (Contributor, Author) commented Oct 23, 2016

I think it would mostly consist of a (*testing.B).Report(testing.BenchmarkResult) method for reporting a sub-benchmark. That, plus an additional Name field on testing.BenchmarkResult, should satisfy my needs:

https://play.golang.org/p/QavnfPoTUr

@aclements (Member) commented Oct 23, 2016

Oh, this isn't at all what I thought you were proposing. I thought you wanted custom metrics (e.g., metrics other than the standard ns/op, allocs/op, etc.). That's certainly what I want. :)

Can you explain why sub-benchmarks don't already solve this?

@benburkert (Contributor, Author) commented Oct 24, 2016

> Oh, this isn't at all what I thought you were proposing. I thought you wanted custom metrics (e.g., metrics other than the standard ns/op, allocs/op, etc.). That's certainly what I want. :)

That would be a nice ability as well. For my immediate need, I'm using the sub-benchmark's name to specify the custom metric:

// BenchmarkThing-8         20000000        80.4 ns/op
// BenchmarkThing/P99.9-8   20000000        80.4 ns/op

But an API for specifying a custom unit would be preferable:

// BenchmarkThing-8         20000000        80.4 ns/op      80.4 P99.9-ns/op

Which could be done with an extra method on BenchmarkResult: https://play.golang.org/p/_ITU8Ta0RZ

> Can you explain why sub-benchmarks don't already solve this?

I'm building on sub-benchmarks but using the reflect package to get at the name and iteration values. And even with reflection there's no way to get at the io.Writer used for output.

@aclements (Member) commented Oct 24, 2016

If I understand correctly, it seems like what you really want is a way of reporting custom metrics, and doing this through sub-benchmarks/extra benchmark lines is just a hack to get around the lack of custom metrics. If so, we should add custom metrics, not add a mechanism to support the hack.

@benburkert (Contributor, Author) commented Oct 24, 2016

> If I understand correctly, it seems like what you really want is a way of reporting custom metrics, and doing this through sub-benchmarks/extra benchmark lines is just a hack to get around the lack of custom metrics.

Yes, that's what I'm wanting; sorry for the confusion.

@josharian (Contributor) commented Jul 20, 2018

Closing this in favor of #26037, which covers the same ground but appears to be moving forward.

@josharian josharian closed this Jul 20, 2018
