Allow units and precision in benchmark #266
Conversation
A RecordValueWithPrecision method is added to the Benchmarker interface. The method behaves like RecordValue, but additionally lets the caller provide a measurement unit and a floating-point truncation precision. The stenographer is also updated to support variable precision.
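For illustration, a minimal sketch of how a caller might use the new method inside a Measure block (the exact parameter order follows my reading of the description above; the values are made up):

Measure("disk writes", func(b Benchmarker) {
	// existing behavior: fixed formatting, no unit
	b.RecordValue("bytes written", 1048576)
	// new behavior: the caller supplies a unit string and the number of decimal places to keep
	b.RecordValueWithPrecision("write throughput", 123.45678, "MB/s", 3)
}, 10)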
This is great! thanks! if you are up for the TimeWithPrecision work go for it - though I don't think it's strictly necessary |
i'm actually planning on deprecating the benchmark behavior as part of the v2 work currently in flight. my sense is it's not widely used, adds complexity to the codebase, and doesn't need to be part of core ginkgo (probably better off as a separate library that is focused on benchmarking and grows more featureful with time). With that said, if you and/or others are using it extensively I'd love to learn more about the usecase/value it's bringing and how painful removing it would be. |
Since I am using ginkgo for all my tests, the measure functionality integrates nicely where needed. It helps to be able to replace an It() with a Measure() here and there to benchmark some sections without having to rewrite code or move it elsewhere. I do agree that turning it into a separate library would be an improvement, though. |
any chance you could point me at a repo that has an example of your usage?
just want to learn more from real-life examples.
|
as i think about replacing the functionality - i'm leaning towards adding a new Gomega subpackage that specializes in measurements/benchmarking and is less tightly coupled to Ginkgo. that would give folks a relatively straightforward migration path and resolve my main concerns with it being in Ginkgo core. could you see that working for your usecase? |
The main project I am working on is private (unfinished) but I will get back to you this weekend with a gist link to some sample code.

It("Should do the thing", func() {
	Measure("the thing", func() {
		// some code
	}, iterations)
})

Whereas before my code would look like

Measure("Doing the thing", func(b Benchmarker) {
	b.Time("the thing", func() {
		// some code
	})
}, iterations)

This could let you have matchers like |
Thanks @cobalt77 ! yes I'd imagine something along the lines of what you're describing along with a suite of associated matchers. A couple more questions as I think about the user experience:
|
My main use is to observe benchmarks just to get an insight into performance rather than asserting that things take under a particular amount of time, but I can see how that might be useful in some cases depending on the tests. The fast/slow/average outputs are great and can be very helpful in identifying which sections of code might be causing a bottleneck. For example, a benchmark helped me identify a bug that was caused by writing into a buffered channel much faster than I could read from it. I had a RecordValue on len(channel), and the output showed a huge spike in the maximum value compared to the average, which clued me in to the problem. So based on that, I think having matchers that can check for outliers, check whether the max or min is unusually far from the average, check that measurements trend in a certain direction, things of that nature, would be quite helpful. I mostly use Measure, but RecordValue can come in handy.

Another use case for me is using Measure in place of a for loop to get benchmark output for free, i.e. replacing

It("should fill the container", func() {
	for i := 0; i < 100; i++ {
		container.Add(i)
	}
	Expect(container.Len()).To(Equal(100))
})

with

i := 0
Measure("filling the container", func(b Benchmarker) {
	b.Time("adding an item", func() {
		container.Add(i)
	})
	i++
}, 100)
It("should contain 100 items", func() {
	Expect(container.Len()).To(Equal(100))
})

Assuming container.Add() could potentially be expensive, this can produce useful output. |
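For the buffered-channel spike mentioned above, here is a minimal sketch of what that kind of measurement might look like with the v1 Benchmarker; produce and ch are hypothetical stand-ins for the producer and the buffered channel under test:

Measure("producer/consumer backlog", func(b Benchmarker) {
	produce(ch) // hypothetical non-blocking write into the buffered channel ch
	// record how far the consumer has fallen behind; a maximum far above the
	// average across iterations points at the kind of bottleneck described above
	b.RecordValue("channel backlog", float64(len(ch)))
}, 1000)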
I've also used Time() to check how long Eventually() takes to complete. Here's an actual code sample:

b.Time("Handling provider add callbacks", func() {
	testCh := make(chan struct{}, len(providers))
	// OnProviderAdded registers a callback (non blocking)
	l.OnProviderAdded(func(ctx context.Context, uuid string) {
		testCh <- struct{}{}
	})
	Eventually(func() int {
		return len(testCh)
	}, 1*time.Second, 1*time.Millisecond).Should(Equal(len(providers)))
})

This would bench how long it took for OnProviderAdded to be called len(providers) times. |
Thanks for all this detail! You’ve given me some ideas for how to design a benchmarking module for gomega. I’ll write those ideas down then circle back to get some input on them :)
|
hey @kralicky if you're interested I've put together a beta version of a new benchmarking package in Gomega. It's called gmeasure and is in the latest Gomega release. The package is documented but I'm working on additional documentation with richer examples for the onsi.github.io/gomega docsite. Let me know if you have any thoughts/feedback. It's a slightly different mental model but I think will provide a ton more flexibility going forward. |
I will definitely check it out and let you know! |
thanks! i'll let you know when the docs are finished too :) |
docs are up now at https://onsi.github.io/gomega/#gmeasure-benchmarking-code |
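For reference, a minimal sketch of the gmeasure workflow being discussed, based on the package as documented at the link above; container here is a placeholder echoing the earlier example in this thread:

import "github.com/onsi/gomega/gmeasure"

It("benchmarks filling the container", func() {
	experiment := gmeasure.NewExperiment("filling the container")
	// Sample runs the callback repeatedly, up to N times
	experiment.Sample(func(idx int) {
		// MeasureDuration times the callback and records it under the given name
		experiment.MeasureDuration("adding an item", func() {
			container.Add(idx)
		})
	}, gmeasure.SamplingConfig{N: 100})
})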
@onsi Tried out gmeasure a bit here, looks pretty good so far! One thing I would recommend would be a way to control the sample rate in the SamplingConfig. Looks like it just samples as fast as possible right now. My intuition was that if you specified both a sample count and a duration, it would perform N samples in the given duration, spacing them out evenly. |
hey @kralicky - super cool to see how you're using this! Looks great and please keep the feedback coming! I'll add a way to control the sample rate. The sample count and duration are intended to be maximums so you can say stuff like "do this 10 times or for up to 1 minute - whichever comes first" |
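Continuing the earlier gmeasure sketch, the "whichever comes first" behavior described above would look roughly like this, using the documented SamplingConfig fields; doTheWork is a placeholder func() and the time package is assumed to be imported:

experiment.Sample(func(idx int) {
	experiment.MeasureDuration("work", doTheWork)
}, gmeasure.SamplingConfig{
	N:        10,          // stop after 10 samples...
	Duration: time.Minute, // ...or after 1 minute, whichever comes first
})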
Addresses issue #176
This change adds support for benchmark values of varying units by adding a RecordValueWithPrecision method.