This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

BUG: rabbitmq-ha chart fails to deploy on K8S 1.9.4 due to ConfigMaps now being mounted RO #4166

Closed
tdcox opened this issue Mar 15, 2018 · 3 comments · Fixed by #4169


@tdcox

tdcox commented Mar 15, 2018

BUG REPORT
Version of Helm and Kubernetes:
Helm v2.8.1 / K8S v1.9.4-gke.1

Which chart:
rabbitmq-ha

What happened:
First pod fails to start.

➜ k logs jx-staging-vrs-mq-0                 
sed: can't create temp file '/etc/rabbitmq/rabbitmq.confXXXXXX': Read-only file system

What you expected to happen:
/etc/rabbitmq/rabbitmq.conf is expected to mount with file permissions 0644, according to the yaml.

How to reproduce it (as minimally and precisely as possible):
helm install to any default K8S 1.9.4 cluster.

Anything else we need to know:
As of Kubernetes 1.9.4, ConfigMap and Secret volumes are mounted read-only. See the following for details:

kubernetes/kubernetes#58720

@svmaris
Contributor

svmaris commented Mar 15, 2018

I've worked around this issue by using a busybox initContainer on the StatefulSet with a command to copy the files from the ConfigMap to an emptyDir volume. I'm not sure if this is the right way to go, but I'd be happy to submit a PR.
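A minimal sketch of that workaround, assuming the ConfigMap is mounted at a staging path and a writable emptyDir replaces the original `/etc/rabbitmq` mount. All names here are illustrative, not the chart's actual values:

```yaml
# Sketch only: copy the now read-only ConfigMap into a writable emptyDir
# before the main container starts.
initContainers:
  - name: copy-rabbitmq-config
    image: busybox
    command: ["sh", "-c", "cp /configmap/* /etc/rabbitmq/"]
    volumeMounts:
      - name: configmap          # read-only ConfigMap source
        mountPath: /configmap
      - name: config             # writable target shared with the main container
        mountPath: /etc/rabbitmq
volumes:
  - name: configmap
    configMap:
      name: rabbitmq-ha-config   # hypothetical ConfigMap name
  - name: config
    emptyDir: {}
```

The main container then mounts the `config` emptyDir at `/etc/rabbitmq` instead of the ConfigMap, so sed can create its temp files there.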

@brianwawok

I think this is the same issue as #4261.

@JWimsingues

JWimsingues commented Mar 19, 2018

For anyone interested, the PR by @svmaris is in progress, thanks to him! Here is the reference: #4169.

Copying his changes worked for me. As @etiennetremel mentioned, do not forget to run:

$ export ERLANGCOOKIE=$(kubectl get secrets -n <NAMESPACE> <HELM_RELEASE_NAME>-rabbitmq-ha -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)
$ helm upgrade <HELM_RELEASE_NAME> \
    --set rabbitmqErlangCookie=$ERLANGCOOKIE \
    stable/rabbitmq-ha

Otherwise you will get this error:
** Connection attempt from disallowed node 'rabbitmqcli61@rabbitmq-ha-rabbitmq-ha-0.rabbitmq-ha-rabbitmq-ha.default.svc.cluster.local' **

If you have already run helm delete <release> --purge to try to reset the RabbitMQ cluster and are now locked out, or you no longer have access to your previous ERLANGCOOKIE value, one solution is to run helm delete <release> --purge and then delete all the RabbitMQ PVCs: kubectl delete pvc data-broker-rabbitmq-ha-0 data-broker-rabbitmq-ha-1 .... The goal is to release the volumes so that new ones are provisioned; for this to work, the "Delete" reclaim policy must be set on your PVs. After that, helm install will work again, since a new ERLANGCOOKIE will be generated and copied to the new volumes.
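The reset procedure described above, as a shell sketch. Release and PVC names are placeholders taken from the example, and these commands destroy the RabbitMQ data:

```
# Destructive reset of the RabbitMQ cluster. <release> and PVC names are placeholders.
helm delete <release> --purge

# Delete the PVCs the StatefulSet left behind so fresh volumes are provisioned.
# The PVs must use the "Delete" reclaim policy for the storage to actually be released.
kubectl delete pvc data-<release>-rabbitmq-ha-0 data-<release>-rabbitmq-ha-1

# Reinstalling generates a new Erlang cookie, stored on the new volumes.
helm install stable/rabbitmq-ha --name <release>
```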

This solution is not appropriate if you are in production and have data in these volumes that you do not want to lose. In that case, another approach would be to mount the PV in a pod, read the cookie from it, and then perform the upgrade as mentioned previously.
