
Introduce support for per-call memory allocator for caching #249

Merged
56quarters merged 4 commits into main from 56quarters/add-options on Jan 9, 2023

Conversation

56quarters
Contributor

@56quarters 56quarters commented Jan 4, 2023

What this PR does:

This change adds the ability to pass a memory allocator to caching methods using one or more Option arguments. This allows the underlying client to use memory allocators better suited to its workload than the Go garbage collector. This is only used by the Memcached client at the moment.
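The Option-argument API described above can be sketched roughly as follows. All names here (Allocator, WithAllocator, Fetch, poolAllocator) are illustrative stand-ins, not the library's actual types or signatures:

```go
package main

import (
	"fmt"
	"sync"
)

// Allocator abstracts buffer allocation so callers can supply pooled memory
// instead of leaning on the garbage collector. Illustrative shape only.
type Allocator interface {
	Get(size int) []byte
}

// poolAllocator reuses buffers via sync.Pool.
type poolAllocator struct{ pool sync.Pool }

func (p *poolAllocator) Get(size int) []byte {
	if b, ok := p.pool.Get().([]byte); ok && cap(b) >= size {
		return b[:size]
	}
	return make([]byte, size)
}

// Option configures a single cache call.
type Option func(*callOptions)

type callOptions struct{ alloc Allocator }

// WithAllocator selects the allocator for one call.
func WithAllocator(a Allocator) Option {
	return func(o *callOptions) { o.alloc = a }
}

// Fetch is a stand-in for a caching method accepting per-call options.
func Fetch(keys []string, opts ...Option) map[string][]byte {
	cfg := callOptions{}
	for _, o := range opts {
		o(&cfg)
	}
	out := make(map[string][]byte, len(keys))
	for _, k := range keys {
		var buf []byte
		if cfg.alloc != nil {
			buf = cfg.alloc.Get(8) // caller-supplied allocator
		} else {
			buf = make([]byte, 8) // default: GC-managed allocation
		}
		out[k] = buf
	}
	return out
}

func main() {
	res := Fetch([]string{"k1", "k2"}, WithAllocator(&poolAllocator{}))
	fmt.Println(len(res)) // 2
}
```

Because the options are variadic, existing callers that don't pass an allocator keep compiling unchanged.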

Which issue(s) this PR fixes:

See grafana/mimir#3772
See grafana/gomemcache#8

Signed-off-by: Nick Pillitteri nick.pillitteri@grafana.com

Checklist

  • Tests updated
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

This change adds the ability to pass a memory allocator to caching
methods via their context argument. This allows the underlying client
to use memory allocators better suited to their workloads than GC.
This is only used by the Memcached client at the moment.

See grafana/mimir#3772

Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
@56quarters
Contributor Author

This approach currently uses a context.Context object to pass an Allocator instance to the various caching layers. I'm not sure if this is a hacky way of "passing optional parameters to functions". In my mind an allocator isn't really something you'd normally pass as an argument. Do people have thoughts or opinions on this?

@56quarters 56quarters marked this pull request as ready for review January 5, 2023 21:08
@56quarters 56quarters changed the title Introduce support for per-call cache options Introduce support for per-call memory allocator for caching Jan 5, 2023
@charleskorn
Contributor

> This approach currently uses a context.Context object to pass an Allocator instance to the various caching layers. I'm not sure if this is a hacky way of "passing optional parameters to functions". In my mind an allocator isn't really something you'd normally pass as an argument. Do people have thoughts or opinions on this?

My 2c: I would prefer an Options-style parameter, like what you introduced in gomemcache, although I agree that it feels a bit weird.

Rationale:

As a consumer of this library, an Options-style parameter makes the functionality more discoverable, whereas using a context.Context, I'd need to do a bit more digging to find it. (Documentation might help mitigate this though.)

Using parameters would also help avoid annoying bugs - for example, imagine a scenario where we had a couple of different libraries that all expected the allocator to be passed in the context.Context, each with their own WithAllocator(ctx, allocator) method. It'd be really easy to accidentally use the wrong WithAllocator method, and it wouldn't be immediately obvious that you'd done this: instead, we'd just silently use the default allocator. With parameters or an Options-style parameter, either you've set the allocator (and type safety prevents any confusion), or you haven't.

@56quarters
Contributor Author

> > This approach currently uses a context.Context object to pass an Allocator instance to the various caching layers. I'm not sure if this is a hacky way of "passing optional parameters to functions". In my mind an allocator isn't really something you'd normally pass as an argument. Do people have thoughts or opinions on this?
>
> My 2c: I would prefer an Options-style parameter, like what you introduced in gomemcache, although I agree that it feels a bit weird.
>
> Rationale:
>
> As a consumer of this library, an Options-style parameter makes the functionality more discoverable, whereas using a context.Context, I'd need to do a bit more digging to find it. (Documentation might help mitigate this though.)
>
> Using parameters would also help avoid annoying bugs - for example, imagine a scenario where we had a couple of different libraries that all expected the allocator to be passed in the context.Context, each with their own WithAllocator(ctx, allocator) method. It'd be really easy to accidentally use the wrong WithAllocator method, and it wouldn't be immediately obvious that you'd done this: instead, we'd just silently use the default allocator. With parameters or an Options-style parameter, either you've set the allocator (and type safety prevents any confusion), or you haven't.

That's a great point: that would make the API much harder to use incorrectly. I'll work on that change tomorrow.

Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
Contributor

@charleskorn charleskorn left a comment

LGTM

Contributor

@pracucci pracucci left a comment


Good job, LGTM! I just left a nit.

Review thread on cache/memcached_client.go (resolved)
Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
@56quarters 56quarters merged commit 7242706 into main Jan 9, 2023
@56quarters 56quarters deleted the 56quarters/add-options branch January 9, 2023 17:00
charleskorn pushed a commit that referenced this pull request Aug 3, 2023
server: Expose `http` and `grpc` listen addresses.
Labels
component/caching enhancement New feature or request
3 participants