
McRouter loses key on scale up #262

Closed
alexliffick opened this issue Jun 29, 2018 · 4 comments

alexliffick commented Jun 29, 2018

I'm running a k8s cluster in GKE and used their walkthrough for putting together a mcrouter setup with memcached. Initially we were using consul key stores, but our cache is too large and causes consul to use too much memory, so we decided to test memcache in its place. I spin up the mcrouter daemonset with a single memcache pod and everything works just fine. At this point I added some test keys; they get and delete fine. The issue comes when I leave keys in place and scale.

I scale up the memcache statefulset and add the second server name to the configmap for mcrouter. Once I see the new server listed by "stats servers", I run a get and one of the keys is no longer there. I've telnetted to port 11211 on the original memcache pod and run a get there, and I can retrieve the same key just fine. The config provided in the configmap is below:

  {
    "pools": {
      "A": {
        "servers": [
          "memcached-0.memcached.default.svc.cluster.local:11211",
          "memcached-1.memcached.default.svc.cluster.local:11211"
        ]
      }
    },
    "route": "PoolRoute|A"
  }

I've also moved to using a statefulset for mcrouter to limit it to one pod, and switched to the official docker image rather than the one in the k8s example Helm chart, with no luck. No matter what I do, after scaling I keep getting "not found" from a get through mcrouter on at least one key, while other keys are still found fine. Help?

kkondaka commented Jul 9, 2018

This is the expected behavior. When the number of servers is increased, some keys hash to the newly added server, but they do not exist on that server, so "not found" is returned. You need to use "WarmUpRoute" if you do not want keys to go missing after scale up. For more information, see https://github.com/facebook/mcrouter/wiki/List-of-Route-Handles#warmuproute. An example config can be found at https://github.com/facebook/mcrouter/wiki/Cold-cache-warm-up-setup
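
For reference, a minimal sketch of what that cold-cache warm-up setup could look like for the config above, assuming the original memcached-0 pod acts as the "warm" side and the expanded pool as the "cold" side; the pool names and the exptime used for warm-up sets are illustrative, so see the wiki pages linked above for the authoritative options:

  {
    "pools": {
      "warm": {
        "servers": [
          "memcached-0.memcached.default.svc.cluster.local:11211"
        ]
      },
      "cold": {
        "servers": [
          "memcached-0.memcached.default.svc.cluster.local:11211",
          "memcached-1.memcached.default.svc.cluster.local:11211"
        ]
      }
    },
    "route": {
      "type": "WarmUpRoute",
      "cold": "PoolRoute|cold",
      "warm": "PoolRoute|warm",
      "exptime": 3600
    }
  }

With a layout like this, gets are served from the cold (expanded) pool and fall back to the warm pool on a miss, which repopulates the cold side over time.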

@alexliffick
Author

What we ultimately need is a single source of truth for each key. When a new box comes online, we just want to start using it going forward rather than having existing keys "migrated" to it in any way. We will be deleting cache keys for each site when rolling out new versions of it, so there can't be any chance that one server holds a different version of that cache than another; each key needs to exist only once. My concern with the warm-up is that it would end up creating two different locations where the same cache entry can be found. What's the best way to accomplish what I'm after?

orishu commented Jul 10, 2018

@alexliffick, you wrote:

We will be deleting cache keys for each site when rolling out new versions

A possible approach is to have that version as a part of the memcache key rather than deleting cache keys.
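
For example (the key names here are hypothetical), a key like homepage:v42 written by the new release never collides with homepage:v41 left behind by the previous one, so old entries can simply be ignored or left to expire instead of being deleted.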

kelu27 commented Jul 19, 2018

Maybe you can prefix your key with your version and use the PrefixSelectorRoute: https://github.com/facebook/mcrouter/wiki/Prefix-routing-setup
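
A rough sketch of how that could look for this cluster, combining the version-in-the-key idea above with prefix routing; the pool names and the "v42:" prefix are made up for illustration, and the exact PrefixSelectorRoute options are described on the wiki page linked above:

  {
    "pools": {
      "old": {
        "servers": [
          "memcached-0.memcached.default.svc.cluster.local:11211"
        ]
      },
      "new": {
        "servers": [
          "memcached-0.memcached.default.svc.cluster.local:11211",
          "memcached-1.memcached.default.svc.cluster.local:11211"
        ]
      }
    },
    "route": {
      "type": "PrefixSelectorRoute",
      "policies": {
        "v42:": "PoolRoute|new"
      },
      "wildcard": "PoolRoute|old"
    }
  }

Keys starting with the new version's prefix would hash across both servers, while everything else keeps routing to the original pool.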

stuclar closed this as completed Feb 25, 2021