Mongodb chart (pod has unbound immediate PersistentVolumeClaims) #12521
Comments
Hi @WStasW I was unable to reproduce the issue. These are the steps I followed:
I tried your steps in the same order, but it doesn't seem to work.

```
kubectl get pods
```

Should tiller-deploy also be there as a pod? My rbac.yaml:
The steps I follow:
Apparently the issue was with `helm install --namespace HERE --name mongodb stable/mongodb`. However, there is another issue: do I need to configure the provisioning etc.? I get the error
All DBs, including MySQL, face the same issue for some reason. I've used
To anyone facing this issue: apparently what you need to do is create a PersistentVolume and a StorageClass, and also define `storageClassName` in the provided `values.yaml` file. The PersistentVolume can look like this:
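A minimal sketch of such a PersistentVolume, assuming a hostPath volume; the name, path, and size below are illustrative, not taken from the original comment:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv                  # illustrative name
spec:
  storageClassName: local-storage   # must match the StorageClass and the chart's storageClassName
  capacity:
    storage: 8Gi                    # assumed size; match or exceed the chart's PVC request
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/mongodb         # assumed local path on the node
```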
and the StorageClass:
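A matching StorageClass sketch, using the no-provisioner option since the PV is created manually (the name is an assumption and must line up with the PV above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage               # must match storageClassName in the PV and in values.yaml
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning; PVs are pre-created
volumeBindingMode: WaitForFirstConsumer
```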
Note: the name has to match the `storageClassName` in the values file you pass to the Helm mongodb chart.
Does your cluster have a local volume provisioner? It seems your cluster cannot allocate volumes for your databases.
Creating the PersistentVolumeClaims should be done automatically by the chart. However, if you're using a volume provisioner that uses a specific StorageClass, you need to indicate that StorageClass when installing the chart.
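For example, the StorageClass can be indicated through the chart's persistence values; the key names below follow the stable/mongodb chart's conventions, but verify them against the chart's documentation:

```yaml
# values.yaml fragment (sketch)
persistence:
  enabled: true
  storageClass: local-storage   # name of the StorageClass your provisioner uses
  size: 8Gi
```

The same setting can typically be passed on the command line, e.g. `helm install --name mongodb --set persistence.storageClass=local-storage stable/mongodb`.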
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
@WStasW Are you using the Kubernetes provided by Docker for Desktop on Windows, by chance? I'm able to reproduce your behavior on this configuration, but not on AKS. I've tried using … However, we can see the PVC is bound to the StatefulSet:

```
Name:          datadir-mongo-mongodb-replicaset-0
Namespace:     default
StorageClass:  hostpath
Status:        Bound
Volume:        pvc-d9a2c0dd-6ded-11e9-a732-00155dd17021
Labels:        app=mongodb-replicaset
               release=mongo
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"ce483207-6de2-11e9-b488-00155dd17020","leaseDurationSeconds":15,"acquireTime":"2019-05-03T21:53:26Z","renewTime":"2019-05-03T21:53:28Z","lea...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=docker.io/hostpath
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age              From                                                Message
  ----    ------                 ----             ----                                                -------
  Normal  ExternalProvisioning   4m (x3 over 4m)  persistentvolume-controller                         waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
  Normal  Provisioning           4m               docker.io/hostpath Davete ce483207-6de2-11e9-b488-00155dd17020  External provisioner is provisioning volume for claim "default/datadir-mongo-mongodb-replicaset-0"
  Normal  ProvisioningSucceeded  4m               docker.io/hostpath Davete ce483207-6de2-11e9-b488-00155dd17020  Successfully provisioned volume pvc-d9a2c0dd-6ded-11e9-a732-00155dd17021
```
Hi @dtzar, what do you obtain when running the command below?
Does it report being bound and claimed by your MongoDB pod?
I believe I'm having a similar issue. I was using my own .yaml files, which failed to switch from Pending to Running due to an unbound volume. I've switched to using the Helm chart and tested on a fresh docker-for-desktop cluster (on Windows). It starts off indicating an unbound volume claim; then it says it is set up; but we end up with an unhealthy pod and the inability to connect.

```
helm install stable/mongodb --name mongodb
kubectl describe pv pvc-549b5867-7002-11e9-8853-00155d014833
```
Hi @drcrook1,
Are you able to reproduce the issue on a different K8s cluster (running on a Linux machine)? It might be related to the docker-for-desktop K8s implementation on Windows. As an alternative, you can create an initContainer that allows you to modify the permissions on the persistent volume you're attaching to your MongoDB container. A user explained the process in the link below: https://github.com/bitnami/bitnami-docker-mongodb/issues/103
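Sketched as a pod-spec fragment, such an initContainer could look like the following; the image, mount path, and UID/GID are assumptions (Bitnami MongoDB images run as a non-root user, typically UID 1001), so adjust them to your setup:

```yaml
spec:
  initContainers:
    - name: volume-permissions
      image: busybox
      # chown the data directory so the non-root mongodb user can write to it
      command: ["sh", "-c", "chown -R 1001:1001 /bitnami/mongodb"]
      volumeMounts:
        - name: datadir               # must match the volume name used by the mongodb container
          mountPath: /bitnami/mongodb # assumed data path for Bitnami MongoDB images
  # ...the mongodb container spec continues here, mounting the same volume
```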
Is this a request for help?:
Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Version of Helm and Kubernetes:
2.13.0
Which chart:
Mongodb
What happened:
pod has unbound immediate PersistentVolumeClaims
What you expected to happen:
To create the PersistentVolumeClaim, or at least see docs
How to reproduce it (as minimally and precisely as possible):
Deploy the mongodb chart; it will tell you
pod has unbound immediate PersistentVolumeClaims
Anything else we need to know:
No