
Create HA Redis deployment as an option #41

Closed · joeholley opened this issue Nov 29, 2018 · 6 comments
Labels: enhancement, help wanted
Milestone: v0.3.0

Comments

@joeholley Collaborator

Basic work here would be something like:

  • Put together the k8s resources necessary to stand up an HA Redis deployment (HA proxy + multiple Redis instances set up for replication, something like the 'alternative solution' in this github repo readme)
  • Test that this works correctly with OM against the 0.2.0 release. Validate that failover works (a smoke-test sketch follows this list).
  • If possible, see if there's a way to configure this such that k8s deployments recover downed instances and automatically rejoin them to the HA configuration without operator intervention
  • If this all works, rename the k8s service that Redis lives behind from redis-sentinel to just redis and update the OM codebase to match (so future users won't be confused by the naming)
  • Finalize a k8s resource file that is a 'drop-in' replacement for the existing deployments/k8s/redis-deployment.json/redis-service.json and install/yaml/01-redis.yaml, standing up an HA Redis deployment that users can elect to use if they need it.
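For the failover validation step above, a minimal smoke test might look like the sketch below. The pod names (redis-sentinel-0, redis-master-0) and the master group name mymaster are illustrative assumptions, not names fixed by this issue.

```bash
# Hypothetical failover check; pod names and the "mymaster" group are assumed.

# 1. Ask Sentinel which instance is currently the master.
kubectl exec redis-sentinel-0 -- redis-cli -p 26379 \
  SENTINEL get-master-addr-by-name mymaster

# 2. Kill the master pod; k8s should recreate it, and it should rejoin as a replica.
kubectl delete pod redis-master-0

# 3. After Sentinel's down-after-milliseconds window elapses, the same
#    query should now return the address of a promoted replica.
kubectl exec redis-sentinel-0 -- redis-cli -p 26379 \
  SENTINEL get-master-addr-by-name mymaster
```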
@joeholley added the enhancement and help wanted labels on Nov 29, 2018
@joeholley Collaborator Author

We probably also want to evaluate the Redis Operator and anything else that looks promising.

@ihrankouski Contributor

Some thoughts.
Having only HAProxy and replicated Redis instances assumes manual intervention when the Redis master becomes unreachable to clients or to the other Redis instances. So that's not real HA; HAProxy can only do the load balancing. At the moment there are two ways to achieve HA:

  1. Set up Redis Cluster - several sharded masters, each replicated to slaves (so quite a lot of boxes to configure and run); clients need to support it.
  2. Run at least 3 Redis Sentinel boxes that monitor several replicated Redis instances: when the current master fails, they promote one of the slaves to master; clients need to connect to the Sentinels to obtain the address of the current master (per https://redis.io/topics/sentinel-clients; the discovery flow is sketched after this list). Some people, however, prefer to have HAProxy pointing directly at the Redis instances and figuring out which of them is the master - then clients don't need to support Sentinel.
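For reference, the Sentinel discovery flow in option 2 boils down to two calls. This is only a sketch, assuming a Sentinel reachable at sentinel-0:26379 monitoring the default mymaster group; the hostname and addresses are made up:

```bash
# Step 1: ask any Sentinel for the current master's address.
redis-cli -h sentinel-0 -p 26379 SENTINEL get-master-addr-by-name mymaster
# 1) "10.0.0.12"
# 2) "6379"

# Step 2: per the sentinel-clients doc, verify the role before trusting the
# answer, since the instance may have been demoted in the meantime.
redis-cli -h 10.0.0.12 -p 6379 ROLE
# 1) "master"
# ...
```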

Regarding the Operators: there are several implementations; https://github.com/spotahome/redis-operator seems to be the most starred. It makes it easy to deploy Redis replicas with Sentinels, saving us from having to configure everything manually. The downside is that clients have to support Sentinel, and because of the autoconfiguration it may not be easy to work around that with HAProxy.

There's also a Helm chart with a similar setup (3 Redis instances + 3 Sentinels).

@joeholley Collaborator Author commented Dec 6, 2018

Yeah, I didn't do a good job of explaining, sorry :( I should have said "HAProxy + multiple Redis instances set up for Sentinel" or the like. We'll need redis-sentinel for the automatic failover and resiliency, and HAProxy (or a k8s resource that can do something similar) so that clients don't have to be 'redis-sentinel' clients but just regular Redis clients, keeping this a 'drop-in' replacement. Something like this: https://karlstoney.com/2015/07/23/redis-sentinel-behind-haproxy/
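The linked article essentially has HAProxy health-check each Redis instance for its replication role, so only the current master stays in rotation. Expressed as a manual shell probe (the instance names here are made up for illustration), the check amounts to:

```bash
# Equivalent of the article's tcp-check, done by hand. Only the host
# reporting "role:master" should receive client traffic.
for host in redis-0 redis-1 redis-2; do
  role=$(redis-cli -h "$host" INFO replication | grep '^role:' | tr -d '\r')
  echo "$host -> $role"
done
```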

If there's a Redis operator that can get us along the path, that's probably what we want to use.

@ihrankouski Contributor commented Dec 7, 2018

The Redis Operator definitely simplifies Redis management in k8s. If we decide to go with it, then I see several ways to still support a 'non-Sentinel' client:

  1. redis-sentinel-proxy
    +  always proxies incoming connections to the relevant master address (obtained from Sentinel)
    +  easy to deploy and configure - just point it to the Sentinel service (that is managed by RedisFailover)
    -  seems to be a non-zero-copy proxy; not sure about its stability & performance (though it works in general)
    -  requires an additional Dockerfile + k8s Deployment + k8s Service

  2. Have HAProxy configured as described here
    +  that's HAProxy
    -  it's not apparent how to pass the Redis pods' IPs to HAProxy and keep them in sync (considering the Pod lifecycle)
    -  an alternative to the IPs above is to have a service per Redis pod, which looks a bit ugly to me
    -  theoretically, in case of split-brain, multiple Redis pods may claim that they are masters - not sure how HAProxy would handle the connections then

  3. Fork Redis Operator to add the ability to label the master Redis pod (from the StatefulSet) as "Current Master"
    +  if implemented, it would require only one more k8s resource to be managed: a Service pointing to the Redis master pod
    -  additional time to implement that functionality
    -  ...and to get the PR merged in the original repo: the same feature was already proposed but sort of declined by the authors (however, hopefully they won't mind adding only the labeling)
    -  in theory there may be issues syncing the pods' labels with the actual state of the Sentinel cluster (delays? two pods labeled as master for short periods of time?); we need somebody to think about this more.

  4. It's possible to have HAProxy always (well, most of the time) pointing to the actual Redis master pod IP (obtained from Sentinel):
    0) enable the HAProxy Runtime API
    1) look up the master address like this:
    echo SENTINEL get-master-addr-by-name mymaster | nc -q 5 <RedisFailover-Sentinel-service-name> 26379
    2) parse the output to extract the IP
    3) write the new address to the HAProxy unix socket like the following:
    echo "set server bk_redis/redis-master-serv addr <IP> port <PORT>" | sudo socat stdio /var/run/haproxy.sock
    4) repeat 1..3 every second?
    Changing an HAProxy backend server address on the fly works well. Still exploring this option; a combined sketch follows this list.
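Tying steps 1)-4) together, the refresh loop could be a small sidecar script like the sketch below. The Sentinel host stands in for the <RedisFailover-Sentinel-service-name> placeholder, and the backend/server names (bk_redis/redis-master-serv) and socket path are the ones used in the steps above; all of them are placeholders, not settled names.

```bash
#!/bin/sh
SENTINEL_HOST=redis-sentinel   # placeholder for <RedisFailover-Sentinel-service-name>

while true; do
  # 1) ask Sentinel for the current master; the raw RESP reply has five
  #    lines: "*2", "$<len>", "<ip>", "$<len>", "<port>"
  reply=$(printf 'SENTINEL get-master-addr-by-name mymaster\r\n' \
    | nc -q 5 "$SENTINEL_HOST" 26379)

  # 2) extract the IP (line 3) and port (line 5) from the RESP reply
  ip=$(echo "$reply" | sed -n '3p' | tr -d '\r')
  port=$(echo "$reply" | sed -n '5p' | tr -d '\r')

  # 3) repoint the HAProxy backend server via the Runtime API socket
  if [ -n "$ip" ] && [ -n "$port" ]; then
    echo "set server bk_redis/redis-master-serv addr $ip port $port" \
      | socat stdio /var/run/haproxy.sock
  fi

  # 4) repeat every second
  sleep 1
done
```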

@joeholley Collaborator Author

I like the first option, provided it has reasonable performance characteristics. Thanks for all the research!

@joeholley Collaborator Author

PR #48

@Laremere added this to the v0.3.0 milestone Apr 23, 2019