Allow result memory to be allocated from per-call Allocator #8
Conversation
Remove the existing Client.Pool for pooling memory used by cache results and replace it with an optional per-call Allocator. The per-call allocator allows callers to determine the correct lifecycle for anything allocated.
Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
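In outline, the per-call allocator pattern described above might look like the following sketch. The `Allocator` method set (`Get`/`Put` on `*[]byte`) is inferred from code snippets quoted later in this review, and `WithAllocator` is named in a review comment below; the option plumbing (`options`, `resolve`, `defaultAllocator`) is an illustrative assumption, not the PR's actual implementation.

```go
package main

import "fmt"

// Allocator is the per-call allocation hook: callers that want pooling
// supply one, and they control the lifecycle of every buffer it returns.
type Allocator interface {
	// Get returns a buffer with at least sz bytes of capacity.
	Get(sz int) *[]byte
	// Put hands a buffer back to the allocator for reuse.
	Put(b *[]byte)
}

// Option configures a single client call, in the functional-option style
// suggested by WithAllocator() in the review comments.
type Option func(*options)

type options struct{ Alloc Allocator }

func WithAllocator(a Allocator) Option {
	return func(o *options) { o.Alloc = a }
}

// defaultAllocator just calls make, so callers that don't pass
// WithAllocator get plain heap allocations with no pooling.
type defaultAllocator struct{}

func (defaultAllocator) Get(sz int) *[]byte { b := make([]byte, 0, sz); return &b }
func (defaultAllocator) Put(*[]byte)        {}

// resolve applies per-call options over the defaults.
func resolve(opts ...Option) *options {
	o := &options{Alloc: defaultAllocator{}}
	for _, fn := range opts {
		fn(o)
	}
	return o
}

func main() {
	o := resolve(WithAllocator(defaultAllocator{}))
	buff := o.Alloc.Get(16)
	fmt.Println(cap(*buff) >= 16)
}
```

The key property is that nothing is pooled globally on the client: each call resolves its own allocator, so the caller decides when buffers are returned.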
e57dbb1 to eb3f0d4 (Compare)
Overall LGTM.
One thing that this might be missing: a test to confirm that the allocator set with WithAllocator() is actually used.
I don't have a strong opinion... have you benchmarked both options to see if there's a significant difference?
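A test like the one suggested could wrap a counting allocator around the client call. Everything below is hypothetical (the follow-up test was deferred, per the merge comment at the end); only the `Get`/`Put`-on-`*[]byte` method set is inferred from snippets elsewhere in this review.

```go
package main

import "fmt"

// countingAllocator records Get/Put calls so a test can assert that the
// client actually routes buffer allocation through the allocator passed
// via WithAllocator(), rather than silently falling back to make().
type countingAllocator struct {
	gets, puts int
}

func (c *countingAllocator) Get(sz int) *[]byte {
	c.gets++
	b := make([]byte, 0, sz)
	return &b
}

func (c *countingAllocator) Put(b *[]byte) { c.puts++ }

func main() {
	alloc := &countingAllocator{}
	// Stand-in for a real client call such as client.Get(key, WithAllocator(alloc));
	// a test would run the call and then assert on the counters.
	buf := alloc.Get(32)
	alloc.Put(buf)
	fmt.Println(alloc.gets, alloc.puts)
}
```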
The design LGTM! I also would love to see a benchmark comparison between `[]byte` and `*[]byte`. Don't focus too much on Mimir store-gateway using `[]byte` in their pool: we can change it if the benefit is significant.
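For background on why the `[]byte` vs `*[]byte` distinction matters for a pool: `sync.Pool.Put` takes an interface value, and storing a `[]byte` there copies the three-word slice header to the heap (one allocation per Put), while a `*[]byte` is a single pointer and needs no allocation. This self-contained sketch (not code from the PR) measures the difference:

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

// Two pools with equivalent contents: one stores []byte values, the other
// stores *[]byte pointers.
var bytePool = sync.Pool{New: func() interface{} { return make([]byte, 0, 1024) }}
var ptrPool = sync.Pool{New: func() interface{} { b := make([]byte, 0, 1024); return &b }}

// allocsPerOp measures amortized heap allocations for one Get/Put round
// trip against each pool.
func allocsPerOp() (byteAllocs, ptrAllocs int64) {
	byteRes := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			buf := bytePool.Get().([]byte)
			bytePool.Put(buf) // converting []byte to interface{} allocates
		}
	})
	ptrRes := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			buf := ptrPool.Get().(*[]byte)
			ptrPool.Put(buf) // pointer fits in the interface word: no allocation
		}
	})
	return byteRes.AllocsPerOp(), ptrRes.AllocsPerOp()
}

func main() {
	byteAllocs, ptrAllocs := allocsPerOp()
	fmt.Printf("[]byte pool: %d allocs/op, *[]byte pool: %d allocs/op\n", byteAllocs, ptrAllocs)
}
```

This is the same issue staticcheck flags as SA6002, and it matches the benchmark deltas reported below (one extra alloc/op in the `[]byte` variant).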
@@ -308,42 +308,55 @@ func BenchmarkScanGetResponseLine(b *testing.B) {
func BenchmarkParseGetResponse(b *testing.B) {
Could you show the benchmark comparison before / after, please?
Old pooling vs new pooling with `[]byte`:
$ benchstat old.txt new.txt
name old time/op new time/op delta
ParseGetResponse-16 251ns ± 0% 276ns ± 0% ~ (p=1.000 n=1+1)
name old alloc/op new alloc/op delta
ParseGetResponse-16 88.0B ± 0% 112.0B ± 0% ~ (p=1.000 n=1+1)
name old allocs/op new allocs/op delta
ParseGetResponse-16 2.00 ± 0% 3.00 ± 0% ~ (p=1.000 n=1+1)
Old pooling vs new pooling with `*[]byte`:
$ benchstat old.txt new.txt
name old time/op new time/op delta
ParseGetResponse-16 251ns ± 0% 255ns ± 0% ~ (p=1.000 n=1+1)
name old alloc/op new alloc/op delta
ParseGetResponse-16 88.0B ± 0% 88.0B ± 0% ~ (all equal)
name old allocs/op new allocs/op delta
ParseGetResponse-16 2.00 ± 0% 2.00 ± 0% ~ (all equal)
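The `~` deltas above come from benchstat having only a single sample per side (n=1+1), so it cannot establish statistical significance. A common workflow to get meaningful deltas (illustrative commands, assuming these benchmarks live in the package being changed) is to collect multiple runs:

```shell
# Collect enough samples for benchstat to compute significance (10 per side).
go test -run=NONE -bench=BenchmarkParseGetResponse -benchmem -count=10 . > old.txt
# ...apply the allocator change, then:
go test -run=NONE -bench=BenchmarkParseGetResponse -benchmem -count=10 . > new.txt
benchstat old.txt new.txt
```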
The delta reporting is misleading: there's actually a difference in the `[]byte` case (88 B / 2 allocs per op before vs 112 B / 3 allocs per op after).
Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
Well, it turns out the current unit tests depend on being able to run a local …
Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
memcache/memcache.go (outdated)
it.Value = make([]byte, buffSize)
}
buff := opts.Alloc.Get(buffSize)
it.Value = *buff
I've elected to not force the length of the buffer like the previous code did. This is because the interface for `Allocator` guarantees that the returned buffer will be correctly sized.
Looking at the old `BytesPool` interface, it didn't guarantee length, only capacity. WDYT about taking that approach here? Seems a bit less error prone, and moves some of the burden onto this code to assert the correct length vs making `Allocator` implementations do it.
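Under the capacity-only contract suggested here, the client re-slices the pooled buffer to the exact length itself instead of trusting the allocator. A minimal sketch of that pattern (all names are illustrative, not from the PR):

```go
package main

import "fmt"

// poolGet stands in for an allocator that, like the old BytesPool contract
// discussed above, guarantees only capacity >= sz, not length.
func poolGet(sz int) *[]byte {
	b := make([]byte, 0, sz) // a real pool would reuse buffers here
	return &b
}

// sized re-slices a capacity-guaranteed buffer to the exact length the
// client needs, keeping the length assertion in client code rather than
// in every Allocator implementation.
func sized(sz int) []byte {
	buff := poolGet(sz)
	return (*buff)[:sz]
}

func main() {
	value := sized(10)
	fmt.Println(len(value), cap(value) >= 10)
}
```

Re-slicing within capacity is free (no copy, no allocation), which is why centralizing the length fix-up in the client costs nothing.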
I've made this change. This also means that we don't have to do an extra 24b allocation in the test allocator (since we already do one in the client when using the pooled slice).
Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
The tests for the client need a lot of work, so I'm going to merge this as-is and will follow up with a test to ensure the allocator is used, plus some other cleanup.