
Redis cluster support #373

Open
dimitarvdimitrov opened this issue Sep 4, 2023 · 3 comments

Comments

@dimitarvdimitrov
Contributor

Initially reported in a grafana/mimir issue grafana/mimir#5896

When using MGET, we need to provide keys that hash to the same slot. The existing client doesn't take this into account, so the larger the Redis cluster, the more likely it is that multiple keys hash to different slots, and the less likely it is that multi-key GETs work.

Redis key-slot distribution is documented in the "Key distribution model" section of the Redis Cluster specification.

Proposals

Some proposals to start the discussion:

  • Do single-key lookups; this has the obvious impact of making lookups slower.
  • Using CRC16, find the cluster slot of each key, then issue one MGET command per unique slot with the keys that hash to it.
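The second proposal could look roughly like the sketch below. This is illustrative only, not dskit code: `keySlot` and `groupBySlot` are invented names for the example. It computes each key's cluster slot with CRC-16/XMODEM (including the hash-tag rule from the cluster spec, where a non-empty `{...}` section is hashed instead of the whole key) and batches keys per slot:

```go
package main

import (
	"fmt"
	"strings"
)

// crc16 implements CRC-16/XMODEM (polynomial 0x1021, init 0x0000),
// the checksum Redis Cluster uses to map keys to hash slots.
func crc16(data []byte) uint16 {
	var crc uint16
	for _, b := range data {
		crc ^= uint16(b) << 8
		for i := 0; i < 8; i++ {
			if crc&0x8000 != 0 {
				crc = crc<<1 ^ 0x1021
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}

// keySlot returns the Redis Cluster hash slot (0..16383) for a key,
// honoring hash tags: if the key contains a non-empty {...} section,
// only that section is hashed.
func keySlot(key string) uint16 {
	if open := strings.IndexByte(key, '{'); open >= 0 {
		if end := strings.IndexByte(key[open+1:], '}'); end > 0 {
			key = key[open+1 : open+1+end]
		}
	}
	return crc16([]byte(key)) % 16384
}

// groupBySlot partitions keys by hash slot, so that each group can be
// issued as a single MGET against the node owning that slot.
func groupBySlot(keys []string) map[uint16][]string {
	groups := make(map[uint16][]string)
	for _, k := range keys {
		s := keySlot(k)
		groups[s] = append(groups[s], k)
	}
	return groups
}

func main() {
	keys := []string{"foo", "bar", "{user1000}.following", "{user1000}.followers"}
	for slot, group := range groupBySlot(keys) {
		fmt.Printf("slot %5d -> %v\n", slot, group)
	}
}
```

Each per-slot group would then be sent as one MGET, and the results reassembled in the caller's original key order. The trade-off versus single-key lookups is one round trip per distinct slot instead of one per key.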
@56quarters
Contributor

Redis support is experimental and unsupported. IMO it shouldn't have ever been added to dskit or Mimir. We don't run it and have no intention of running it. I propose not fixing this issue or spending any time on Redis unless there is a customer requirement for it.

@zackman0010

zackman0010 commented Sep 5, 2023

> Redis support is experimental and unsupported. IMO it shouldn't have ever been added to dskit or Mimir. We don't run it and have no intention of running it. I propose not fixing this issue or spending any time on Redis unless there is a customer requirement for it.

This comment seems unnecessarily hostile. Redis is a very widely used caching solution; just because you don't run it doesn't mean no one does. In our own case, we already have Redis installed. If we asked to install Memcached, the response would be "Why do you need another cache when we already have Redis?"

Furthermore, Redis support is also in both Tempo and Loki, and it's not even marked as experimental in Loki. My understanding of dskit is that it's intended to be a central library that consolidates the implementations of utilities into a single dependency, so that all of Grafana's components use the same implementation instead of their own separate implementation. If that is the case, the Redis Client easily matches that description.

@56quarters
Contributor

> Redis support is experimental and unsupported. IMO it shouldn't have ever been added to dskit or Mimir. We don't run it and have no intention of running it. I propose not fixing this issue or spending any time on Redis unless there is a customer requirement for it.

> This comment seems unnecessarily hostile. Redis is a very widely used caching solution; just because you don't run it doesn't mean no one does. In our own case, we already have Redis installed. If we asked to install Memcached, the response would be "Why do you need another cache when we already have Redis?"

That's the exact same logic I'm applying here. We don't run it internally, have no reason to start, and hence can't support it.

> Furthermore, Redis support is also in both Tempo and Loki, and it's not even marked as experimental in Loki. My understanding of dskit is that it's intended to be a central library that consolidates the implementations of utilities into a single dependency, so that all of Grafana's components use the same implementation instead of their own separate implementation. If that is the case, the Redis client easily matches that description.

Tempo and Loki are their own projects and can make their own decisions about what they want to support.
