This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

Mongodb chart (pod has unbound immediate PersistentVolumeClaims) #12521

Closed
WStasW opened this issue Mar 26, 2019 · 9 comments
Comments

@WStasW

WStasW commented Mar 26, 2019

Is this a request for help?:
Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
2.13.0

Which chart:
Mongodb

What happened:
pod has unbound immediate PersistentVolumeClaims

What you expected to happen:
The chart to create the PersistentVolumeClaim, or at least documentation explaining how to create it myself.

How to reproduce it (as minimally and precisely as possible):
Deploy the mongodb chart; the pod will report "pod has unbound immediate PersistentVolumeClaims".

Anything else we need to know:
No

@juan131
Collaborator

juan131 commented Mar 26, 2019

Hi @WStasW

I was unable to reproduce the issue. These are the steps I followed:

$ helm repo update
...
$ helm search stable/mongodb
NAME                     	CHART VERSION	APP VERSION	DESCRIPTION
stable/mongodb           	5.14.1       	4.0.7      	NoSQL document-oriented database that stores JSON-like do...
$ helm install stable/mongodb --name mongodb
...
$ kubectl get pvc
NAME        STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb     Bound         pvc-191eb314-4fbf-11e9-a3e0-080027542934   8Gi        RWO            standard       3m8s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-191eb314-4fbf-11e9-a3e0-080027542934   8Gi        RWO            Delete           Bound    default/mongodb     standard                3m44s
$ kubectl describe pod $(kubectl get pods -l app=mongodb -o jsonpath='{.items[0].metadata.name}')
...
    Mounts:
      /bitnami/mongodb from data (rw)
...
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb
    ReadOnly:   false

@WStasW
Author

WStasW commented Mar 27, 2019

I tried your steps in the same order, but it doesn't seem to work.
kubectl get services returns this:

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
client-node-port   NodePort    10.105.177.180   <none>        4000:31516/TCP                  42h
mongodb            ClusterIP   10.109.118.188   <none>        27017/TCP                       2m20s
server-node-port   NodePort    10.108.20.153    <none>        1114:30001/TCP,9000:30002/TCP   42h
tiller-deploy      ClusterIP   10.106.251.85    <none>        44134/TCP                       13m

kubectl get pods

NAME                                 READY   STATUS    RESTARTS   AGE
client-deployment-7c89dc6455-jvw9p   1/1     Running   8          42h
mongodb-c45fd69c4-w87jx              0/1     Pending   0          2m9s
server-deployment-6d57cf6b79-jcgns   1/1     Running   33         42h

Should tiller-deploy also appear as a pod?

my rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: report-card-dev
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: report-card-dev
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: report-card-dev
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io

The steps I follow:

  1. kubectl apply -f ./kbs/ -> that is where rbac file is located
  2. helm init --service-account tiller --tiller-namespace report-card-dev
  3. helm init --service-account tiller --history-max 200
  4. helm init --upgrade --service-account
  5. helm install stable/mongodb --name mongodb

UPDATE:

Apparently the issue was with `helm install --namespace HERE --name mongodb stable/mongodb`.

However, there is another issue: do I need to configure provisioning? I get the error

Failed to create provisioner: Provisioning in volume plugin "kubernetes.io/host-path" is disabled

All DBs, including mysql, face the same issue for some reason. I've used kind to create the cluster.
How do I tackle that?

@WStasW
Author

WStasW commented Mar 29, 2019

To anyone facing this issue: apparently you need to create a PersistentVolume and a StorageClass, and set `storageClassName` in the chart's `values.yaml`. The PersistentVolume can look like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: mongodb-storage
  local:
    path: /db/mongo
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube-dev-nri-control-plane

# https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/

and storageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodb-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

# https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/

Note: the StorageClass name has to match the `storageClassName` you set in the MongoDB chart's values file.
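For reference, the matching values override might look like the sketch below. The parameter name (`persistence.storageClass`) is an assumption; check the chart's own `values.yaml` for the exact key in your chart version.

```yaml
# Sketch of a values.yaml override for the stable/mongodb chart
# (key names assumed; verify against your chart version)
persistence:
  enabled: true
  storageClass: mongodb-storage  # must match the StorageClass created above
  size: 8Gi
```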

@juan131
Collaborator

juan131 commented Apr 2, 2019

All DBs, including mysql, face the same issue for some reason. I've used kind to create the cluster.
How do I tackle that?

Does your cluster have a local volume provisioner? It seems your cluster cannot allocate volumes for your databases.

To anyone facing this issue: apparently you need to create a PersistentVolume and a StorageClass, and set `storageClassName` in the chart's `values.yaml`. The PersistentVolume can look like this:

Creating the PersistentVolumeClaims should be done automatically by the chart. However, if you're using a volume provisioner that uses a specific StorageClass, you need to indicate that StorageClass when installing the chart.
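As a sketch, the install command could pass the StorageClass like this. The flag name (`persistence.storageClass`) is an assumption and depends on the chart version, so check the chart's documented values first.

```shell
# Install the chart, pointing it at a specific StorageClass
# (parameter name assumed; verify against the chart's values.yaml)
helm install stable/mongodb --name mongodb \
  --set persistence.storageClass=mongodb-storage
```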

@stale

stale bot commented May 2, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 2, 2019
@dtzar
Contributor

dtzar commented May 3, 2019

@WStasW Are you using the kubernetes provided by Docker for Desktop on Windows by chance? I'm able to reproduce your behavior on this configuration, but not on AKS. I've tried using persistentVolume.storageClass="" and persistentVolume.storageClass="hostpath" and both effectively have the same result as you describe:
Warning FailedScheduling 4m (x5 over 4m) default-scheduler pod has unbound PersistentVolumeClaims

However, we can see the PVC is showing bound to the statefulset:

Name:          datadir-mongo-mongodb-replicaset-0
Namespace:     default
StorageClass:  hostpath
Status:        Bound
Volume:        pvc-d9a2c0dd-6ded-11e9-a732-00155dd17021
Labels:        app=mongodb-replicaset
               release=mongo
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"ce483207-6de2-11e9-b488-00155dd17020","leaseDurationSeconds":15,"acquireTime":"2019-05-03T21:53:26Z","renewTime":"2019-05-03T21:53:28Z","lea...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=docker.io/hostpath
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age              From                                                            Message
  ----    ------                 ----             ----                                                            -------
  Normal  ExternalProvisioning   4m (x3 over 4m)  persistentvolume-controller                                     waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
  Normal  Provisioning           4m               docker.io/hostpath Davete ce483207-6de2-11e9-b488-00155dd17020  External provisioner is provisioning volume for claim "default/datadir-mongo-mongodb-replicaset-0"
  Normal  ProvisioningSucceeded  4m               docker.io/hostpath Davete ce483207-6de2-11e9-b488-00155dd17020  Successfully provisioned volume pvc-d9a2c0dd-6ded-11e9-a732-00155dd17021

@stale stale bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 3, 2019
@juan131
Collaborator

juan131 commented May 6, 2019

Hi @dtzar

What do you obtain when running the command below?

$ kubectl describe pv pvc-d9a2c0dd-6ded-11e9-a732-00155dd17021

Does it report being bound and claimed by your MongoDB pod?

Status:          Bound
Claim:           default/mongodb

@drcrook1

drcrook1 commented May 6, 2019

I believe I'm having a similar issue. I was originally using my own .yaml files, and the pod stayed Pending because of an unbound volume.

I've switched to using the helm chart and tested on a fresh cluster on docker-for-desktop (on Windows). It starts off indicating an unbound volume claim; then it says the volume was set up; but we end up with an unhealthy pod and an inability to connect.

helm install stable/mongodb --name mongodb
kubectl describe pod mongodb-5fd4c6786b-4msnx

Name:           mongodb-5fd4c6786b-4msnx
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Mon, 06 May 2019 09:24:53 -0400
Labels:         app=mongodb
                pod-template-hash=1980723426
                release=mongodb
Annotations:    <none>
Status:         Running
IP:             10.1.0.7
Controlled By:  ReplicaSet/mongodb-5fd4c6786b
Containers:
  mongodb:
    Container ID:   docker://715fed374fc39c64a5908ee0f17ab500055e25f8243f4b3b00bebb411ecfafc4
    Image:          docker.io/bitnami/mongodb:3.6.6-debian-9
    Image ID:       docker-pullable://bitnami/mongodb@sha256:110d3d4bfa71c8ebaabc4ee0a8fd19464b68fe239c5c4effcd41c585cc53665a
    Port:           27017/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 06 May 2019 09:25:07 -0400
    Ready:          False
    Restart Count:  0
    Liveness:       exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:      exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      MONGODB_ROOT_PASSWORD:  <set to the key 'mongodb-root-password' in secret 'mongodb'>  Optional: false
      MONGODB_USERNAME:
      MONGODB_DATABASE:
      MONGODB_EXTRA_FLAGS:
    Mounts:
      /bitnami/mongodb from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dxzww (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb
    ReadOnly:   false
  default-token-dxzww:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dxzww
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                         Message
  ----     ------                 ----               ----                         -------
  Warning  FailedScheduling       41s (x2 over 41s)  default-scheduler            pod has unbound PersistentVolumeClaims
  Normal   Scheduled              40s                default-scheduler            Successfully assigned mongodb-5fd4c6786b-4msnx to docker-for-desktop
  Normal   SuccessfulMountVolume  40s                kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "pvc-549b5867-7002-11e9-8853-00155d014833"
  Normal   SuccessfulMountVolume  40s                kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-dxzww"
  Normal   Pulling                39s                kubelet, docker-for-desktop  pulling image "docker.io/bitnami/mongodb:3.6.6-debian-9"
  Normal   Pulled                 26s                kubelet, docker-for-desktop  Successfully pulled image "docker.io/bitnami/mongodb:3.6.6-debian-9"
  Normal   Created                26s                kubelet, docker-for-desktop  Created container
  Normal   Started                26s                kubelet, docker-for-desktop  Started container
  Warning  Unhealthy              12s                kubelet, docker-for-desktop  Readiness probe failed: MongoDB shell version v3.6.6
connecting to: mongodb://127.0.0.1:27017
2019-05-06T13:25:21.395+0000 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2019-05-06T13:25:21.401+0000 E QUERY    [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
  Warning  Unhealthy  2s  kubelet, docker-for-desktop  Readiness probe failed: MongoDB shell version v3.6.6
connecting to: mongodb://127.0.0.1:27017
2019-05-06T13:25:31.230+0000 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2019-05-06T13:25:31.230+0000 E QUERY    [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed

kubectl describe pv pvc-549b5867-7002-11e9-8853-00155d014833

Name:            pvc-549b5867-7002-11e9-8853-00155d014833
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by=docker.io/hostpath
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    hostpath
Status:          Bound
Claim:           default/mongodb
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        8Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /host_mnt/c/Users/DrCrook/.docker/Volumes/mongodb/pvc-549b5867-7002-11e9-8853-00155d014833
    HostPathType:
Events:            <none>

@WStasW WStasW closed this as completed May 7, 2019
@juan131
Collaborator

juan131 commented May 7, 2019

Hi @drcrook1

I've switched to using the helm chart and tested on a fresh cluster on docker-for-desktop (on Windows).

Are you able to reproduce the issue on a different K8s cluster (e.g. one running on a Linux machine)? It might be related to the docker-for-desktop K8s implementation on Windows.

As an alternative, you can add an initContainer that adjusts the permissions on the persistent volume attached to your MongoDB container. A user explained the process in the link below:

https://github.com/bitnami/bitnami-docker-mongodb/issues/103
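A rough sketch of that initContainer approach, based on the linked thread rather than a built-in chart option. The container name and the uid (1001, the non-root user in Bitnami images) are assumptions; adjust them for your setup.

```yaml
# Hypothetical pod-spec fragment: fix volume ownership before MongoDB starts
# (uid 1001 assumed to be the Bitnami image's non-root user)
initContainers:
  - name: volume-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 1001:1001 /bitnami/mongodb"]
    volumeMounts:
      - name: data
        mountPath: /bitnami/mongodb
```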


4 participants