dev-spaces interferes with Redis #32
Comments
Thanks for reporting this issue. Your analysis is correct and identifies a bug in the mindaro-proxy component. Do you have a Helm chart or raw Kubernetes yaml files you are using to install this specific redis cluster setup? That would help us greatly in recreating the problem on our side. Thanks!
Hi @stepro, thanks. I'm writing a quick answer right before entering... a meeting :( Here is the yaml to create the redis stateful set, and here the config map:
Thanks, I'll take a look.
Thanks @antogh for your patience on this issue. To get your setup running I needed to create a headless service object and fix a problem in the sentinel.sh script. Unfortunately, the issue here is a general problem that occurs when injecting any kind of intercepting proxy, whether for dev spaces or for other solutions like istio. I believe the specific problem is that when an intended slave (e.g. one of the replica pods) communicates with the master, the injected proxy intercepts the connection and breaks the replication traffic.

The closest related issue I could find was this one for istio, where you'll notice the attached yaml files already disable the istio sidecar on the master and slave pods using a special istio annotation. The Helm chart did not generate these annotations, so I'm not sure how it was determined that istio needed to be disabled for these pods. The actual issue there looks to be some problem with istio still getting in the way even after it was told to get out of the way.

For dev spaces, we do not currently have a mechanism for a pod to opt out of being instrumented with the sidecar proxy. Your best option would be to run the redis cache in a different Kubernetes namespace that has not been upgraded to a dev space. We will look into providing a label or annotation, similar to istio's, that will allow you to opt out of the sidecar proxy for specific pods.
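For reference, the istio opt-out mentioned above is an annotation on the pod template. A minimal sketch of what that looks like (the names and image tag here are illustrative, not taken from the linked issue):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-master
  template:
    metadata:
      labels:
        app: redis-master
      annotations:
        sidecar.istio.io/inject: "false"   # tells istio's injector to skip this pod
    spec:
      containers:
      - name: redis
        image: redis:5          # assumed image tag
```

At the time of this thread, dev spaces did not honor an equivalent annotation, which is what the later comments track.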
Thanks @stepro. I tried some hacks to have the redis pod opt out of the mindaro-proxy. Unfortunately kubernetes does not allow removing a container from a pod by updating its yaml, so I tried changing the image name to a neutral "alpine" image. It worked for some time (the redis log showed a successful initialization), but then the aks agent noticed the hash of the mindaro-proxy container had changed and restarted the whole pod, causing an infinite crash loop :(

In the end I came to the same conclusion you suggested: placing the redis pods into a different namespace not affected by dev spaces. Redis works fine again now.

But unfortunately the problems never end. Now VS does not debug with azure dev spaces anymore. It worked fine the first time I tried; now it doesn't work anymore. I completely removed redis and the new namespace, but the problem persists. VS is able to create the SVC and DEPLOYMENT on the cluster, but then fails (after 10 minutes of silence) to create the POD with the actual application that would be port forwarded to my local machine. It seems to be a communication problem. VS can create the container locally without problems, so it's not a local docker issue; it just can't send the container image to the cluster into the pod. Do you have any idea what it could be? Remote debugging inside kubernetes is really precious for speeding up development, and I'd really like to use this feature. BTW I opened another issue here about this problem. Thanks again
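For anyone landing here, the namespace workaround can look like the following sketch (the namespace and file names are illustrative). Only namespaces that have been upgraded to a dev space get the mindaro-proxy sidecar injected, so a plain namespace keeps redis untouched:

```sh
# Create a namespace that is never upgraded to a dev space,
# then deploy redis there; its pods get no mindaro-proxy sidecar.
kubectl create namespace redis-prod
kubectl apply -f redis-configmap.yaml   --namespace redis-prod
kubectl apply -f redis-statefulset.yaml --namespace redis-prod
```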
I just discovered this article and will be investigating if there is anything we can do to make this scenario work. Thanks for opening the other issue - someone on the team familiar with these connectivity issues will be able to help you.
@stepro However, after one day of pain, I'm very happy with the setup I have now; it works like a charm. Allow me to give you a suggestion: I would write a disclaimer in the dev spaces doc here, something like a note that the dev spaces sidecar proxy can interfere with pods such as redis master/slave setups, and that such workloads should run in a namespace that has not been upgraded to a dev space.
This came out of this issue: Azure/dev-spaces#32
Thanks @antogh - I've submitted a request to get this added to our troubleshooting documentation. |
Please add an annotation to disable it as soon as possible; that would be a great feature.
We've checked in the ability to disable the proxy, and it should be available in a couple of weeks.
This should be fixed in the latest versions of Dev Spaces. Please let us know if you continue to see issues. |
Yesterday I installed dev-spaces for the first time on my AKS cluster (which I'm using for learning and experiments, not in production).
Right after the installation everything was fine: I could debug a containerized asp.net core app from my VS2017 directly on the kubernetes cluster. This asp.net core app reads a redis cache that is also installed in the cluster, in the form of a stateful set of 3 pods with 2 containers each (redis + sentinel); the 1st pod is a redis master, the other 2 are slaves. I had used this setup for 10 days and it was working fine. A sketch of this setup is shown below.
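The setup described above roughly corresponds to a StatefulSet like the following sketch (the names, image tags, and sentinel command are assumptions for illustration, not the reporter's actual yaml):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis            # headless service assumed to exist
  replicas: 3                   # pod 0 acts as master, pods 1-2 as slaves
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis             # first container: the redis server itself
        image: redis:5          # assumed image tag
        ports:
        - containerPort: 6379
      - name: sentinel          # second container: sentinel for failover
        image: redis:5          # assumed image tag
        command: ["redis-sentinel", "/etc/redis/sentinel.conf"]
        ports:
        - containerPort: 26379
```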
Each evening I deallocate the VMs in the cluster and restart them in the morning. Kubernetes takes care of restarting all the pods. It worked for 10 days before I installed dev-spaces.
This morning when I restarted the cluster VMs, redis was not working. I had plenty of connection errors in the log; master and slaves could not communicate anymore. I restarted the pods multiple times and even recreated the whole stateful set from scratch. Nothing worked.
I noticed that dev-spaces had installed an additional container named mindaro-proxy in the redis pods, and reading the logs I found this container was intercepting and closing all the communications targeting the redis containers.
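To see the injected container and its logs, something like the following works (the pod name here is illustrative, following StatefulSet naming):

```sh
# List the containers in one of the redis pods; after the dev spaces
# upgrade, a mindaro-proxy container shows up next to redis and sentinel.
kubectl get pod redis-0 -o jsonpath='{.spec.containers[*].name}'

# Read the injected proxy's logs to see it handling the redis traffic.
kubectl logs redis-0 -c mindaro-proxy
```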
I then removed dev-spaces with the az aks remove-dev-spaces command and recreated the redis stateful set and pods; this time they don't have the mindaro-proxy container and they work fine like before.
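The removal step, for reference, assuming the standard --name/--resource-group arguments and placeholder names:

```sh
# Remove the dev spaces instrumentation from the cluster, then
# recreate the stateful set so new pods come up without the sidecar.
az aks remove-dev-spaces --name myCluster --resource-group myResourceGroup
kubectl delete statefulset redis
kubectl apply -f redis-statefulset.yaml
```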
The debugging feature is great and saves me a lot of time, but it comes with this bad side effect. It would be great if this problem could be solved.
Thank you