
Allow result memory to be allocated from per-call Allocator #8

Merged
56quarters merged 7 commits into master from 56quarters/pooling on Jan 4, 2023

Conversation

@56quarters

Remove the existing Client.Pool for pooling memory used by cache results and replace it with an optional per-call Allocator. The per-call allocator allows callers to determine the correct lifecycle for anything allocated.

Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
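
For illustration, the intended call pattern looks roughly like this. The exact `Get` signature and option plumbing are assumptions here, not the final API; `process` and `alloc` are placeholders:

```go
// Hedged sketch of the per-call allocator usage, not the exact API.
func fetch(client *memcache.Client, alloc memcache.Allocator, key string) error {
	item, err := client.Get(key, memcache.WithAllocator(alloc))
	if err != nil {
		return err
	}
	process(item.Value)
	// The caller, not the client, decides when the result buffer goes
	// back to the pool, so its lifecycle is entirely under caller control.
	alloc.Put(&item.Value)
	return nil
}
```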

@56quarters (author)

Note that the Allocator interface here returns []byte as opposed to *[]byte. We use both in Mimir for our various byte pooling functionality. []byte means copying the slice header (24 bytes) when getting something from the pool but can be used without needing to be dereferenced. Anyone have an opinion on what we should do here?
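
For reference, a minimal sketch of the two shapes being compared (names are illustrative, not the final API):

```go
// Value-returning form: no dereference at the call site, but every Get
// copies the 24-byte slice header out of whatever pool backs it, and
// storing a []byte back into a sync.Pool allocates because the slice
// header doesn't fit in an interface value.
type sliceAllocator interface {
	Get(size int) []byte
	Put(buf []byte)
}

// Pointer-returning form: avoids that copy and the interface-conversion
// allocation, at the cost of a dereference wherever the buffer is used.
type ptrAllocator interface {
	Get(size int) *[]byte
	Put(buf *[]byte)
}
```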

56quarters marked this pull request as ready for review on January 3, 2023, 17:55
Remove the existing Client.Pool for pooling memory used by cache
results and replace it with an optional per-call Allocator. The
per-call allocator allows callers to determine the correct lifecycle
for anything allocated.

Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>

@charleskorn left a comment


Overall LGTM.

One thing that this might be missing: a test to confirm that the allocator set with WithAllocator() is actually used.
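
A possible shape for that test, assuming Get accepts per-call options; the countingAllocator and the newTestClient helper are hypothetical and only illustrate the idea:

```go
// countingAllocator is a hypothetical test double that records whether
// the client asked it for a result buffer.
type countingAllocator struct {
	gets int
}

func (a *countingAllocator) Get(size int) *[]byte {
	a.gets++
	buf := make([]byte, 0, size)
	return &buf
}

func (a *countingAllocator) Put(buf *[]byte) {}

func TestGetUsesPerCallAllocator(t *testing.T) {
	c := newTestClient(t) // assumed helper pointing at a test memcached
	if err := c.Set(&Item{Key: "foo", Value: []byte("bar")}); err != nil {
		t.Fatal(err)
	}

	alloc := &countingAllocator{}
	if _, err := c.Get("foo", WithAllocator(alloc)); err != nil {
		t.Fatal(err)
	}
	if alloc.gets == 0 {
		t.Fatal("expected the per-call allocator to be used for the result buffer")
	}
}
```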

memcache/options.go (outdated review thread, resolved)
@charleskorn

> Note that the Allocator interface here returns []byte as opposed to *[]byte. We use both in Mimir for our various byte pooling functionality. []byte means copying the slice header (24 bytes) when getting something from the pool but can be used without needing to be dereferenced. Anyone have an opinion on what we should do here?

I don't have a strong opinion... have you benchmarked both options to see if there's a significant difference?


@pracucci left a comment


The design LGTM! I would also love to see a benchmark comparison between []byte and *[]byte. Don't focus too much on the Mimir store-gateway using []byte in its pool: we can change that if the benefit is significant.

memcache/options.go (outdated review thread, resolved)
memcache/memcache_test.go (outdated review thread, resolved)
```diff
@@ -308,42 +308,55 @@ func BenchmarkScanGetResponseLine(b *testing.B) {
 func BenchmarkParseGetResponse(b *testing.B) {
```


Could you show the before/after benchmark comparison, please?

56quarters (author)


> Could you show the before/after benchmark comparison, please?

Old pooling vs new pooling with []byte:

```
$ benchstat old.txt new.txt
name                 old time/op    new time/op    delta
ParseGetResponse-16     251ns ± 0%     276ns ± 0%   ~     (p=1.000 n=1+1)

name                 old alloc/op   new alloc/op   delta
ParseGetResponse-16     88.0B ± 0%    112.0B ± 0%   ~     (p=1.000 n=1+1)

name                 old allocs/op  new allocs/op  delta
ParseGetResponse-16      2.00 ± 0%      3.00 ± 0%   ~     (p=1.000 n=1+1)
```

Old pooling vs new pooling with *[]byte:

```
$ benchstat old.txt new.txt
name                 old time/op    new time/op    delta
ParseGetResponse-16     251ns ± 0%     255ns ± 0%   ~     (p=1.000 n=1+1)

name                 old alloc/op   new alloc/op   delta
ParseGetResponse-16     88.0B ± 0%     88.0B ± 0%   ~     (all equal)

name                 old allocs/op  new allocs/op  delta
ParseGetResponse-16      2.00 ± 0%      2.00 ± 0%   ~     (all equal)
```



The delta reporting is misleading: with a single sample per side (n=1) benchstat can't establish significance, so it prints ~. There actually is a difference: the []byte version adds an allocation and 24 B per op (88 B → 112 B, 2 → 3 allocs/op).
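
If it's worth a follow-up, a more meaningful comparison could be produced by collecting several samples per side, for example with something like `go test -run=NONE -bench=ParseGetResponse -benchmem -count=10` for each variant, then feeding the two output files to benchstat so the ± and p values become meaningful.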

@56quarters (author)

> Overall LGTM.
>
> One thing that this might be missing: a test to confirm that the allocator set with WithAllocator() is actually used.

Well, it turns out the current unit tests depend on being able to run a local memcached. If there's no memcached binary available (and there isn't, because CI runs in a standard Ubuntu image), they're skipped. So none of the tests have actually been running: https://github.com/grafana/gomemcache/blob/master/memcache/memcache_test.go#L39
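
For context, the skip follows the usual pattern of trying to start a local memcached and bailing out when it can't; paraphrased, not the exact code from memcache_test.go:

```go
package memcache

import (
	"os/exec"
	"path/filepath"
	"testing"
)

// Paraphrased shape of the skip: when the memcached binary can't be
// started, the test skips itself, which is why CI silently runs nothing.
func TestAgainstLocalMemcached(t *testing.T) {
	sock := filepath.Join(t.TempDir(), "memcached.sock")
	cmd := exec.Command("memcached", "-s", sock)
	if err := cmd.Start(); err != nil {
		t.Skipf("skipping test; couldn't start memcached: %v", err)
	}
	t.Cleanup(func() {
		_ = cmd.Process.Kill()
		_ = cmd.Wait()
	})
	// ... assertions against the local server would go here ...
}
```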

```go
it.Value = make([]byte, buffSize)
}
buff := opts.Alloc.Get(buffSize)
it.Value = *buff
```
56quarters (author)

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I've elected not to force the length of the buffer like the previous code did, because the Allocator interface guarantees that the returned buffer will be correctly sized.

56quarters (author)


Looking at the old BytesPool interface, it didn't guarantee length, only capacity. WDYT about taking that approach here? It seems a bit less error-prone and moves the burden of asserting the correct length onto this code rather than onto Allocator implementations.

56quarters (author)


I've made this change. It also means we don't need an extra 24-byte allocation in the test allocator, since the client already does one when using the pooled slice.
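
Roughly, the call site then looks something like this (a sketch of the idea, not the exact diff):

```go
// The allocator now only guarantees capacity, so the client re-slices
// the pooled buffer to the exact length it needs itself.
buff := opts.Alloc.Get(buffSize)
it.Value = (*buff)[:buffSize]
```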

@56quarters (author)

The tests for the client need a lot of work, so I'm going to merge this as-is and follow up with a test that ensures the allocator is used, plus some other cleanup.

56quarters merged commit c1cca81 into master on Jan 4, 2023
56quarters deleted the 56quarters/pooling branch on January 4, 2023, 17:05