document how to use PV in minikube #7828

Open
liranmauda opened this issue Apr 21, 2020 · 32 comments
Labels
addon/storage-provisioner · help wanted · kind/documentation · lifecycle/frozen · priority/backlog

Comments

@liranmauda

When I try to deploy MongoDB on minikube v1.9.2 it fails with:
pod has unbound immediate PersistentVolumeClaims

$ minikube start
😄  minikube v1.9.2 on Darwin 10.14.6
    ▪ KUBECONFIG=github/noobaa-operator/kubeconfig
✨  Using the hyperkit driver based on user configuration
👍  Starting control plane node m01 in cluster minikube
🔥  Creating hyperkit VM (CPUs=6, Memory=3000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
$ minikube addons list
|-----------------------------|----------|--------------|
|         ADDON NAME          | PROFILE  |    STATUS    |
|-----------------------------|----------|--------------|
| dashboard                   | minikube | disabled     |
| default-storageclass        | minikube | enabled ✅   |
| efk                         | minikube | disabled     |
| freshpod                    | minikube | disabled     |
| gvisor                      | minikube | disabled     |
| helm-tiller                 | minikube | disabled     |
| ingress                     | minikube | disabled     |
| ingress-dns                 | minikube | disabled     |
| istio                       | minikube | disabled     |
| istio-provisioner           | minikube | disabled     |
| logviewer                   | minikube | disabled     |
| metrics-server              | minikube | disabled     |
| nvidia-driver-installer     | minikube | disabled     |
| nvidia-gpu-device-plugin    | minikube | disabled     |
| registry                    | minikube | disabled     |
| registry-aliases            | minikube | disabled     |
| registry-creds              | minikube | disabled     |
| storage-provisioner         | minikube | enabled ✅   |
| storage-provisioner-gluster | minikube | disabled     |
|-----------------------------|----------|--------------|
$ minikube config view
- cpus: 6
- memory: 3000
- vm-driver: hyperkit
$ kubectl get pv,pvc
NAME                                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/db-noobaa-db-0   Pending                                      standard       15m
$ kubectl describe pv,pvc
Name:          db-noobaa-db-0
Namespace:     test
StorageClass:  standard
Status:        Pending
Volume:
Labels:        app=noobaa
               noobaa-db=noobaa
Annotations:   volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    noobaa-db-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  ExternalProvisioning  91s (x62 over 16m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
$ kubectl describe pod/noobaa-db-0
Name:           noobaa-db-0
Namespace:      test
Priority:       0
Node:           <none>
Labels:         app=noobaa
                controller-revision-hash=noobaa-db-8485b48f4d
                noobaa-db=noobaa
                statefulset.kubernetes.io/pod-name=noobaa-db-0
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/noobaa-db
Init Containers:
  init:
    Image:      noobaa/noobaa-core:5.3.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /noobaa_init_files/noobaa_init.sh
      init_mongo
    Limits:
      cpu:     500m
      memory:  500Mi
    Requests:
      cpu:        500m
      memory:     500Mi
    Environment:  <none>
    Mounts:
      /mongo_data from db (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from noobaa-token-vjwpw (ro)
Containers:
  db:
    Image:      centos/mongodb-36-centos7
    Port:       <none>
    Host Port:  <none>
    Command:
      bash
      -c
      /opt/rh/rh-mongodb36/root/usr/bin/mongod --port 27017 --bind_ip_all --dbpath /data/mongo/cluster/shard1
    Limits:
      cpu:     100m
      memory:  500M
    Requests:
      cpu:        100m
      memory:     500M
    Environment:  <none>
    Mounts:
      /data from db (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from noobaa-token-vjwpw (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  db-noobaa-db-0
    ReadOnly:   false
  noobaa-token-vjwpw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  noobaa-token-vjwpw
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  7s (x15 over 17m)  default-scheduler  running "VolumeBinding" filter plugin for pod "noobaa-db-0": pod has unbound immediate PersistentVolumeClaims
MacBook-Pro:noobaa-operator liranmauda$ kubectl get all
NAME                                   READY   STATUS    RESTARTS   AGE
pod/noobaa-core-0                      1/1     Running   2          18m
pod/noobaa-db-0                        0/1     Pending   0          18m
pod/noobaa-operator-676b7b4979-6dzsw   1/1     Running   0          18m

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                    AGE
service/noobaa-db     ClusterIP      10.102.213.137   <none>        27017/TCP                                                  18m
service/noobaa-mgmt   LoadBalancer   10.106.57.165    <pending>     80:30879/TCP,443:31429/TCP,8445:32204/TCP,8446:32546/TCP   18m
service/s3            LoadBalancer   10.103.141.147   <pending>     80:32217/TCP,443:31257/TCP,8444:31003/TCP                  18m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/noobaa-operator   1/1     1            1           18m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/noobaa-operator-676b7b4979   1         1         1       18m

NAME                           READY   AGE
statefulset.apps/noobaa-core   1/1     18m
statefulset.apps/noobaa-db     0/1     18m
MacBook-Pro:noobaa-operator liranmauda$ kubectl describe statefulset.apps/noobaa-db
Name:               noobaa-db
Namespace:          test
CreationTimestamp:  Tue, 21 Apr 2020 17:35:38 +0300
Selector:           noobaa-db=noobaa
Labels:             app=noobaa
Annotations:        <none>
Replicas:           1 desired | 1 total
Update Strategy:    RollingUpdate
Pods Status:        0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=noobaa
                    noobaa-db=noobaa
  Service Account:  noobaa
  Init Containers:
   init:
    Image:      noobaa/noobaa-core:5.3.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /noobaa_init_files/noobaa_init.sh
      init_mongo
    Limits:
      cpu:     500m
      memory:  500Mi
    Requests:
      cpu:        500m
      memory:     500Mi
    Environment:  <none>
    Mounts:
      /mongo_data from db (rw)
  Containers:
   db:
    Image:      centos/mongodb-36-centos7
    Port:       <none>
    Host Port:  <none>
    Command:
      bash
      -c
      /opt/rh/rh-mongodb36/root/usr/bin/mongod --port 27017 --bind_ip_all --dbpath /data/mongo/cluster/shard1
    Limits:
      cpu:     100m
      memory:  500M
    Requests:
      cpu:        100m
      memory:     500M
    Environment:  <none>
    Mounts:
      /data from db (rw)
  Volumes:  <none>
Volume Claims:
  Name:          db
  StorageClass:
  Labels:        app=noobaa
  Annotations:   <none>
  Capacity:      50Gi
  Access Modes:  [ReadWriteMany]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  18m   statefulset-controller  create Claim db-noobaa-db-0 Pod noobaa-db-0 in StatefulSet noobaa-db success
  Normal  SuccessfulCreate  18m   statefulset-controller  create Pod noobaa-db-0 in StatefulSet noobaa-db successful
@liranmauda
Author

I am using macOS (Darwin 10.14.6) and hyperkit.

On v1.8.2 it doesn't happen (I downgraded minikube and it passed, then upgraded it again and it failed again).

It happened on several Mac machines.

Maybe related to #3869

@priyawadhwa priyawadhwa added the kind/support Categorizes issue or PR as a support question. label Apr 21, 2020
@priyawadhwa

Hey @liranmauda, thanks for opening this issue. It looks like it could be a bug with the storage provisioner.

Would you be able to provide the k8s files you applied to the cluster so that I could reproduce this issue?

@liranmauda
Author

Hi @priyawadhwa,
here is the StatefulSet YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: noobaa-db
  labels:
    app: noobaa
spec:
  replicas: 1
  selector:
    matchLabels:
      noobaa-db: noobaa
  serviceName: noobaa-db
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: noobaa
        noobaa-db: noobaa
    spec:
      serviceAccountName: noobaa
      initContainers:
        #----------------#
        # INIT CONTAINER #
        #----------------#
        - name: init
          image: NOOBAA_CORE_IMAGE
          command:
            - /noobaa_init_files/noobaa_init.sh
            - init_mongo
          resources:
            requests:
              cpu: "500m"
              memory: "500Mi"
            limits:
              cpu: "500m"
              memory: "500Mi"
          volumeMounts:
            - name: db
              mountPath: /mongo_data
      containers:
        #--------------------#
        # DATABASE CONTAINER #
        #--------------------#
        - name: db
          image: NOOBAA_DB_IMAGE
          command:
            - bash
            - -c
            - /opt/rh/rh-mongodb36/root/usr/bin/mongod --port 27017 --bind_ip_all --dbpath /data/mongo/cluster/shard1
          resources:
            requests:
              cpu: "2"
              memory: "4Gi"
            limits:
              cpu: "2"
              memory: "4Gi"
          volumeMounts:
            - name: db
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: db
        labels:
          app: noobaa
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi

Tell me if you need anything more.

@flolu

flolu commented Apr 28, 2020

I am running into similar issues. Any updates?

@yoavcloud

Running into the same issue trying to use: https://github.com/helm/charts/tree/master/stable/mongodb-replicaset

minikube version
minikube version: v1.5.1
commit: 4df684c

kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:41:55Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Help would be appreciated, thank you.

@flolu

flolu commented Apr 28, 2020

I don't know if this helps, but I was able to fix the problem by changing my persistent volumes. I am now using something like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-database-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-database-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/my-database"
    type: DirectoryOrCreate

and then you can use the volume in your deployment like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-database-deployment
spec:
  selector:
    matchLabels:
      app: my-database
  replicas: 1
  template:
    metadata:
      labels:
        app: my-database
    spec:
      containers:
        - name: my-database
          image: my-database-image:latest
          volumeMounts:
            - name: persistent-db-storage
              mountPath: /my-database/mount-path
      volumes:
        - name: persistent-db-storage
          persistentVolumeClaim:
            claimName: my-database-pvc
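
(Side note, in case it helps: for this kind of static binding the local-storage StorageClass object is not strictly required, since the PV and PVC are matched by the storageClassName string. If you do want the class to exist as an object, a minimal no-provisioner definition, just as a sketch, could look like this:)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage            # matches the storageClassName used above
provisioner: kubernetes.io/no-provisioner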

@yoavcloud

Update: after about 20 minutes or so, the issue resolved itself.

@flolu

flolu commented Apr 28, 2020

@yoavcloud ok great :)

@tstromberg
Contributor

If someone runs into this, could they please provide the output of minikube logs? It would be helpful to see the storage provisioner state. Thanks!

@tstromberg tstromberg added addon/storage-provisioner Issues relating to storage provisioner addon triage/needs-information Indicates an issue needs more information in order to work on it. labels May 1, 2020
@posuch

posuch commented May 3, 2020

Funny thing: the same deployment works on a real k8s cluster.

logs.txt

@ghost

ghost commented May 12, 2020

@tstromberg output from minikube logs shown below:

==> storage-provisioner [0cc0c93bc9b8] <==
E0512 05:02:36.173614       1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to list *v1.PersistentVolumeClaim: v1.PersistentVolumeClaimList: Items: []v1.PersistentVolumeClaim: v1.PersistentVolumeClaim: ObjectMeta: v1.ObjectMeta: readObjectFieldAsBytes: expect : after object field, parsing 1010 ...:{},"k:{\"... at {"kind":"PersistentVolumeClaimList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/persistentvolumeclaims","resourceVersion":"46363"},"items":[{"metadata":{"name":"keycloak-postgresql-claim","namespace":"keycloak","selfLink":"/api/v1/namespaces/keycloak/persistentvolumeclaims/keycloak-postgresql-claim","uid":"faa87fe5-b2e5-4b39-b4e7-2eafe3b9ce63","resourceVersion":"46363","creationTimestamp":"2020-05-12T04:46:34Z","labels":{"app":"keycloak"},"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"k8s.io/minikube-hostpath"},"ownerReferences":[{"apiVersion":"keycloak.org/v1alpha1","kind":"Keycloak","name":"example-keycloak","uid":"a1899913-3ca1-45a0-86f3-e0c721ceea06","controller":true,"blockOwnerDeletion":true}],"finalizers":["kubernetes.io/pvc-protection"],"managedFields":[{"manager":"keycloak-operator","operation":"Update","apiVersion":"v1","time":"2020-05-12T04:46:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:app":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a1899913-3ca1-45a0-86f3-e0c721ceea06\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}},"f:status":{"f:phase":{}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2020-05-12T04:46:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volume.beta.kubernetes.io/storage-provisioner":{}}}}}]},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard","volumeMode":"Filesystem"},"status":{"phase":"Pending"}}]}

The line above is repeated hundreds of times.

Additionally, the following line exists at the tail end of the logs:

==> storage-provisioner [5b046032cebd] <==
F0512 02:44:13.940167       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout

@ghost

ghost commented May 12, 2020

This is caused by the addition of managedFields in v1.18.0 beta 2 [1]. More details about the issue can be found in kubernetes/kubernetes#89080. I've found that minikube config set kubernetes-version v1.16.0 is a workaround for now. Or, creating the PV before creating the dependent resource also works.
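
For the second workaround, here is a minimal sketch of a manually created hostPath PV that a pending claim using the default standard class could bind to (the name, size, and path are only illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv-0               # illustrative name
spec:
  storageClassName: standard      # must match the claim's storage class
  capacity:
    storage: 1Gi                  # at least what the claim requests
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/manual-pv-0       # a path inside the minikube VM
    type: DirectoryOrCreate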

In the first log excerpt that I pasted above, you can see that the managedFields cannot be parsed by r2d4's fork of external-storage. But, looking at the source here, it seems like the changes have already been made.

@tstromberg based on your comment in #3628, does gcr.io/k8s-minikube/storage-provisioner still need to be updated?

[1] https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/

@posuch

posuch commented May 12, 2020

Tried the workaround (minikube config set kubernetes-version v1.16.0) and creating the PV first. Neither helped on Arch Linux / minikube 1.9.2. Will try 1.10 later.

@gordillo-ramon

gordillo-ramon commented May 23, 2020

Hi.

I am facing a similar issue with other Helm charts. It seems to be related to how the finalizers are set up in the PVC configuration. Using finalizers: {} creates the PVC correctly, and the default finalizer is added later:

  finalizers:
  - kubernetes.io/pvc-protection

Hope it can help to identify the issue.

@jlindholm

(quoting the storage-provisioner output from minikube logs in the comment above: the repeated reflector.go:205 PersistentVolumeClaimList parse error and the main.go:37 i/o timeout)

I have the same in my logs. Is there any update or workaround to this?

@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels May 28, 2020
@dpolivaev

dpolivaev commented Jun 11, 2020

I have learned that minikube after version 1.8.2 uses the Docker driver instead of a virtual machine driver by default, and that saving and restoring of data is not yet implemented for the related directories.

So if you start minikube with an explicit driver selection, such as minikube start --driver=virtualbox, it could help in this case as well. Please check it and post the results here.

(See #8458)

@liranmauda
Author

I am using hyperkit, and it is not working:

cat ~/.minikube/config/config.json
{
    "cpus": 6,
    "memory": 3000,
    "driver": "hyperkit"
}
$ minikube version
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

I am still getting the same errors. For the pod:

$kubectl describe pod/noobaa-db-0
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  18s (x17 over 20m)  default-scheduler  running "VolumeBinding" filter plugin for pod "noobaa-db-0": pod has unbound immediate PersistentVolumeClaims

From the PVC:

$kubectl describe pvc/db-noobaa-db-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  ExternalProvisioning  85s (x82 over 21m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
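
(To see what the provisioner itself is doing, besides minikube logs, you can also check the addon pod's logs directly; assuming the default addon pod name in kube-system:)

$ kubectl -n kube-system logs storage-provisioner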

@khusseini

khusseini commented Jun 22, 2020

I get the same when creating an Elasticsearch cluster using the Elasticsearch operator:
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html

kubectl version
Client Version: version.Info
{
   "Major":"1",
   "Minor":"18",
   "GitVersion":"v1.18.4",
   "GitCommit":"c96aede7b5205121079932896c4ad89bb93260af",
   "GitTreeState":"clean",
   "BuildDate":"2020-06-17T11:41:22Z",
   "GoVersion":"go1.13.9",
   "Compiler":"gc",
   "Platform":"linux/amd64"
}

Server Version: version.Info

{
   "Major":"1",
   "Minor":"18",
   "GitVersion":"v1.18.3",
   "GitCommit":"2e7996e3e2712684bc73f0dec0200d64eec7fe40",
   "GitTreeState":"clean",
   "BuildDate":"2020-05-20T12:43:34Z",
   "GoVersion":"go1.13.9",
   "Compiler":"gc",
   "Platform":"linux/amd64"
}
==> storage-provisioner [1a6feb0feced] <==
E0622 20:48:45.221595       1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to list *v1.PersistentVolumeClaim: v1.PersistentVolumeClaimList: Items: []v1.PersistentVolumeClaim: v1.PersistentVolumeClaim: ObjectMeta: v1.ObjectMeta: readObjectFieldAsBytes: expect : after object field, parsing 1443 ...:{},"k:{\"... at {"kind":"PersistentVolumeClaimList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/persistentvolumeclaims","resourceVersion":"1714"},"items":[{"metadata":{"name":"elasticsearch-data-elasticsearch-es-default-0","namespace":"akashascrolls","selfLink":"/api/v1/namespaces/akashascrolls/persistentvolumeclaims/elasticsearch-data-elasticsearch-es-default-0","uid":"72b99b3f-4989-4a5a-8be3-796bed9f3265","resourceVersion":"1714","creationTimestamp":"2020-06-22T20:38:29Z","labels":{"common.k8s.elastic.co/type":"elasticsearch","elasticsearch.k8s.elastic.co/cluster-name":"elasticsearch","elasticsearch.k8s.elastic.co/statefulset-name":"elasticsearch-es-default"},"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"k8s.io/minikube-hostpath"},"ownerReferences":[{"apiVersion":"elasticsearch.k8s.elastic.co/v1","kind":"Elasticsearch","name":"elasticsearch","uid":"ec3d15b4-4811-4af9-9617-c77aee501a80","controller":true,"blockOwnerDeletion":false}],"finalizers":["kubernetes.io/pvc-protection"],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2020-06-22T20:38:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volume.beta.kubernetes.io/storage-provisioner":{}},"f:labels":{".":{},"f:common.k8s.elastic.co/type":{},"f:elasticsearch.k8s.elastic.co/cluster-name":{},"f:elasticsearch.k8s.elastic.co/statefulset-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec3d15b4-4811-4af9-9617-c77aee501a80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}},"f:status":{"f:phase":{}}}}]},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard","volumeMode":"Filesystem"},"status":{"phase":"Pending"}}]}

I'm using the docker driver. I have a mysql deployment with the following PVC and that one gets bound.

@gordillo-ramon

I think you hit straight to the point!

The issue is easier to see if you look at the YAML. If you do kubectl get pvc <name> -o yaml, you get something like this in the managedFields:

        f:ownerReferences:
          .: {}
          k:{"uid":"690cb65e-c608-4995-97ce-68c7eb7ce3a6"}:

which, if you translate it into JSON (for example kubectl get pvc <name> -o json), becomes:

                        "f:ownerReferences": {
                            ".": {},
                            "k:{\"uid\":\"39a5cd2c-ad5d-4915-800d-fb27bc2884da\"}": {
                                ".": {},

This is valid from a JSON perspective, but it seems readObjectFieldAsBytes is not correctly handling the escaped quotes in the field name.

It is indeed #7218

@macdi

macdi commented Jul 3, 2020

Hello,

I had the same issue while trying to deploy Elasticsearch on Minikube following this guide:
https://www.elastic.co/blog/getting-started-with-elastic-cloud-on-kubernetes-deployment

This is my configuration:

minikube version: v1.11.0
commit: 57e2f55

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Thank you very much in advance for your help.

@medyagh medyagh changed the title pod has unbound immediate PersistentVolumeClaims - when installing mongodb document how to use PV in minikube Jul 29, 2020
@medyagh medyagh added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/documentation Categorizes issue or PR as related to documentation. and removed kind/bug Categorizes issue or PR as related to a bug. labels Jul 29, 2020
@medyagh
Member

medyagh commented Jul 29, 2020

We have an integration test for PV; we should ensure that it covers this case.

@chris-downs

chris-downs commented Aug 4, 2020

I'm running into this issue with minikube v1.12.1 running k8s 1.18.3:

Warning FailedScheduling 7m51s (x3 over 7m52s) default-scheduler running "VolumeBinding" filter plugin for pod "roach-test-cockroachdb-2": pod has unbound immediate PersistentVolumeClaims

It looks like it's due to permissions issues:

==> storage-provisioner [7e735da44478] <== ... E0804 19:42:56.707424 1 controller.go:682] Error watching for provisioning success, can't provision for claim "default/datadir-roach-test-cockroachdb-2": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "default"

You can repro by installing cockroachdb via helm: https://www.cockroachlabs.com/docs/stable/orchestrate-a-local-cluster-with-kubernetes.html
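
(If it really is only the missing events RBAC, one blunt workaround sketch, purely an assumption on my side and not an official fix, with made-up names, would be to grant the storage-provisioner service account those permissions explicitly:)

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: storage-provisioner-events     # illustrative name
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner-events     # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: storage-provisioner-events
subjects:
  - kind: ServiceAccount
    name: storage-provisioner
    namespace: kube-system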

@jeffwalsh

Also experiencing this with any Helm chart that requires a PV (redis-ha, rabbitmq-ha, prometheus, grafana).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 13, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 13, 2020
@sharifelgamal sharifelgamal added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Dec 16, 2020
@graste
Contributor

graste commented Jan 28, 2021

I'm seeing this after upgrading minikube from 1.14.2 to 1.17.0, using VirtualBox. No PV/PVC works: default Helm charts that used to work are no longer working. I tried start/stop/delete and Kubernetes versions 1.18.15 and 1.20.2 in minikube; still failing ("unbound immediate PersistentVolumeClaims"). Deleting the box and using the same Helm charts/values with Kubernetes 1.17.17 on minikube 1.17.0 works.

@spowelljr spowelljr added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels May 26, 2021
@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Jun 2, 2021
@spowelljr spowelljr added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Sep 22, 2021
@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Oct 13, 2021
@spowelljr spowelljr added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Feb 16, 2022