This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[stable/redis-ha] Can't connect to Redis from outside the Kubernetes cluster #14492

Closed
eladtamary opened this issue Jun 4, 2019 · 8 comments · Fixed by #15305
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@eladtamary

Describe the bug
When a Redis client starts, it asks one of the Sentinels for the address of the elected master.
Sentinel returns the pod's internal IP, and the client sends its requests to that address.
When the client runs outside the K8S cluster, it fails to connect to the master, since that IP is only reachable from within the K8S cluster.
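The failure mode can be sketched in plain Python: Sentinel announces the master by its pod IP, which comes from a cluster-internal (private) range and is not routable from outside. A minimal sketch, where the reply tuple and the 10.42.0.0/16 pod CIDR are illustrative assumptions, not values from this chart:

```python
import ipaddress

# Hypothetical reply to `SENTINEL get-master-addr-by-name mymaster`:
# Sentinel hands back the elected master's pod IP and port.
master_addr = ("10.42.0.17", 6379)  # illustrative pod-CIDR address

ip = ipaddress.ip_address(master_addr[0])

# Pod IPs are allocated from a cluster-internal range, so they are
# private (RFC 1918) and unreachable for clients outside the cluster.
print(ip.is_private)  # True: an external client cannot route to it
```

A client inside the cluster can route to this IP, which is why the same configuration works for in-cluster consumers.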

Version of Helm and Kubernetes:
Any

Which chart:
redis-ha

What happened:
Connection refused when trying to initialize the Redis client.

What you expected to happen:
Client should be able to connect.

How to reproduce it (as minimally and precisely as possible):

  1. Install redis-ha chart.
  2. Try to connect to this chart from outside the cluster using sentinel.
@stale

stale bot commented Jul 4, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 4, 2019
@DandyDeveloper
Collaborator

@eladtamary I don't think this is a bug as much as a feature. I'm currently thinking of how to best manage this in my environment.

By default, I think this chart was built to support internal comms.

The decision to make is whether to:
a. Leave the chart unchanged and recommend a proxy for managing this.
b. Update the chart to deploy a proxy that the service communicates with instead.

@stale stale bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 5, 2019
@stale

stale bot commented Aug 4, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 4, 2019
@stale

stale bot commented Aug 18, 2019

This issue is being automatically closed due to inactivity.

@stale stale bot closed this as completed Aug 18, 2019
@AkashKrDutta

AkashKrDutta commented Dec 12, 2019

One solution is to create a LoadBalancer service that points to all the master and slave pods. This can be used for reads.
For writes, try a custom script (invoked as the reconfig script the Sentinels run on failover) that patches changes through the Kubernetes API, moving a master label from one pod to another. That way the label is attached to whichever pod becomes the new master, and a service named redis-master that selects on this floating label will always point to the current master. Use that service for writes.
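The label-shifting idea above can be sketched as the pair of strategic-merge patches such a reconfig hook would send to the Kubernetes API. A sketch with hypothetical label and pod names (`redis-role`, `redis-ha-server-*`); a real hook would apply each patch with `kubectl patch pod` or a Kubernetes client library:

```python
import json

ROLE_LABEL = "redis-role"  # hypothetical floating label the Service selects on

def master_label_patches(old_master_pod, new_master_pod):
    """Build the merge patches that move the master label from the old
    pod to the one Sentinel just promoted."""
    remove = {"metadata": {"labels": {ROLE_LABEL: None}}}   # None deletes the label
    add = {"metadata": {"labels": {ROLE_LABEL: "master"}}}  # mark the new master
    return {old_master_pod: remove, new_master_pod: add}

patches = master_label_patches("redis-ha-server-0", "redis-ha-server-1")
for pod, patch in patches.items():
    # e.g. kubectl patch pod <pod> -p '<patch>'
    print(pod, json.dumps(patch))
```

Setting a label value to `null`/`None` in a merge patch removes it, which is what keeps the `redis-master` Service pointed at exactly one pod.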

@DandyDeveloper
Collaborator

@AkashKrDutta This is no longer an issue, as we now have HAProxy as a load balancer.
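In chart versions that ship the HAProxy front end, it is enabled and exposed through values; a sketch assuming the chart exposes keys along these lines (verify against the `values.yaml` of your chart version):

```yaml
haproxy:
  enabled: true          # deploy HAProxy in front of the Redis pods
  service:
    type: LoadBalancer   # expose the proxy outside the cluster
```

External clients then connect to the HAProxy service address instead of asking Sentinel for a pod IP.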

@AkashKrDutta

AkashKrDutta commented Dec 13, 2019

@DandyDeveloper Thanks for the info! Although the extra deployment (the HAProxy) could be avoided by shifting the labels directly, using a PATCH request to the Kubernetes API to attach the required label. I'll be testing both to see which is better!

@enkicoma

@DandyDeveloper & @AkashKrDutta Hi guys,
may I ask for help with my scenario too? I have the same issue: I can't successfully use Redis on EKS.
Could not connect to Redis at ******.eu-***.elb.amazonaws.com:0: Can't assign requested address
For some reason the Redis master keeps changing its IP and fails, even though I exposed it externally.
I think I deployed it wrong, or I don't understand the full picture...
helm install --name ***-redis --set master.service.type=LoadBalancer --set master.persistence.enabled=true --set master.persistence.size=20Gi --set master.statefulset.updateStrategy=RollingUpdate --set cluster.slaveCount=1 --set password=**** stable/redis

Any clue?
