
"pod has unbound immediate PersistentVolumeClaims" for helm charts(redis, mysql, postgresql) #3869

Closed
wkexperimental opened this issue Mar 13, 2019 · 25 comments
Labels
area/storage: storage bugs
kind/bug: Categorizes issue or PR as related to a bug.
priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@wkexperimental

Hi, I tried to run helm install stable/postgresql, but it gave me "pod has unbound immediate PersistentVolumeClaims" on minikube v0.35.0; it worked just fine on minikube v0.33 (the redis and mysql charts give the same error). I reinstalled minikube and also tried starting from a fresh minikube, but I don't understand where I'm going wrong. Many thanks for the help. Here are some details:
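
Roughly, the steps to reproduce (the minikube start flags are taken from my config, shown further below):

minikube start --vm-driver=hyperkit --cpus=4 --memory=5000
helm install stable/postgresql
kubectl get pods,pvc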

kubectl version:

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

minikube addons list:

- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- gvisor: disabled
- heapster: disabled
- ingress: disabled
- logviewer: disabled
- metrics-server: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled

minikube config view:

- cpus: 4
- dashboard: true
- default-storageclass: true
- memory: 5000
- vm-driver: hyperkit

minikube logs:

==> coredns <==
.:53
2019-03-13T15:55:21.676Z [INFO] CoreDNS-1.2.6
2019-03-13T15:55:21.676Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
E0313 15:55:46.676447       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0313 15:55:46.677030       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0313 15:55:46.677247       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
==> kube-apiserver <==
I0313 15:55:06.541132       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0313 15:55:06.579372       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0313 15:55:06.618936       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0313 15:55:06.660102       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0313 15:55:06.700907       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0313 15:55:06.741198       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0313 15:55:06.780863       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0313 15:55:06.821581       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0313 15:55:06.862643       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0313 15:55:06.902315       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0313 15:55:06.941124       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0313 15:55:06.981981       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0313 15:55:07.022289       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0313 15:55:07.061306       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0313 15:55:07.100915       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0313 15:55:07.139972       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0313 15:55:07.181117       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0313 15:55:07.219102       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0313 15:55:07.259346       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0313 15:55:07.305217       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0313 15:55:07.340713       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0313 15:55:07.380632       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0313 15:55:07.421168       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0313 15:55:07.460340       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0313 15:55:07.504424       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0313 15:55:07.541185       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0313 15:55:07.581258       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0313 15:55:07.617420       1 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0313 15:55:07.625603       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0313 15:55:07.661003       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0313 15:55:07.704460       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0313 15:55:07.743268       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0313 15:55:07.781466       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0313 15:55:07.822209       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0313 15:55:07.861887       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0313 15:55:07.898431       1 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0313 15:55:07.902020       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0313 15:55:07.939781       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0313 15:55:07.980305       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0313 15:55:08.020818       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0313 15:55:08.063174       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0313 15:55:08.104255       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0313 15:55:08.530243       1 controller.go:608] quota admission added evaluator for: serviceaccounts
I0313 15:55:09.710260       1 controller.go:608] quota admission added evaluator for: deployments.apps
I0313 15:55:09.747908       1 controller.go:608] quota admission added evaluator for: daemonsets.apps
I0313 15:55:14.960886       1 controller.go:608] quota admission added evaluator for: replicasets.apps
I0313 15:55:15.383087       1 controller.go:608] quota admission added evaluator for: controllerrevisions.apps
I0313 15:56:37.891094       1 controller.go:608] quota admission added evaluator for: deployments.extensions
I0313 15:58:45.557811       1 controller.go:608] quota admission added evaluator for: statefulsets.apps
E0313 15:58:45.747512       1 upgradeaware.go:343] Error proxying data from client to backend: read tcp 192.168.64.36:8443->192.168.64.1:59221: read: connection reset by peer
==> kube-scheduler <==
E0313 15:55:03.149821       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0313 15:55:03.152460       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0313 15:55:03.153884       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0313 15:55:03.154899       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0313 15:55:04.140252       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0313 15:55:04.141590       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0313 15:55:04.142755       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0313 15:55:04.145361       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0313 15:55:04.148096       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0313 15:55:04.150614       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0313 15:55:04.151751       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0313 15:55:04.153593       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0313 15:55:04.154826       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0313 15:55:04.155821       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0313 15:55:05.143305       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0313 15:55:05.143871       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0313 15:55:05.144444       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0313 15:55:05.147601       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0313 15:55:05.149795       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0313 15:55:05.152716       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0313 15:55:05.153547       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0313 15:55:05.155115       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0313 15:55:05.156495       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0313 15:55:05.157515       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0313 15:55:06.145088       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0313 15:55:06.146103       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0313 15:55:06.147275       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0313 15:55:06.149065       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0313 15:55:06.150735       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0313 15:55:06.153819       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0313 15:55:06.155147       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0313 15:55:06.156827       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0313 15:55:06.157801       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0313 15:55:06.158701       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0313 15:55:08.027463       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0313 15:55:08.127910       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0313 15:55:08.128159       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
I0313 15:55:08.135936       1 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
E0313 15:58:45.588430       1 factory.go:1519] Error scheduling default/early-cow-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0313 15:58:45.596130       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0313 15:58:45.596506       1 factory.go:1519] Error scheduling default/early-cow-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0313 15:58:45.600949       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0313 15:58:45.601533       1 factory.go:1519] Error scheduling default/early-cow-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0313 15:58:45.609572       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0313 15:58:45.615592       1 factory.go:1519] Error scheduling default/early-cow-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0313 15:58:45.623997       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0313 15:58:45.628000       1 factory.go:1519] Error scheduling default/early-cow-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0313 15:58:45.636867       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0313 15:58:45.637308       1 factory.go:1519] Error scheduling default/early-cow-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0313 15:58:45.647360       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
==> kubelet <==
-- Logs begin at Wed 2019-03-13 15:53:34 UTC, end at Wed 2019-03-13 16:02:32 UTC. --
Mar 13 15:54:59 minikube kubelet[2736]: E0313 15:54:59.735718    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:54:59 minikube kubelet[2736]: E0313 15:54:59.738561    2736 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158b8fb0efb535bc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1a67d1a736f9bc, ext:341027708, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1a67d1b341fd56, ext:543076124, loc:(*time.Location)(0x71d6440)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 13 15:54:59 minikube kubelet[2736]: E0313 15:54:59.836863    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:54:59 minikube kubelet[2736]: E0313 15:54:59.937687    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.038441    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.138949    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.140785    2736 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158b8fb0efb4afc2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1a67d1a73673c2, ext:340993405, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1a67d1b3582fe5, ext:544530863, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.239400    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.339869    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.440726    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.540916    2736 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158b8fb0efb495ce", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1a67d1a73659ce, ext:340986767, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1a67d1b358173a, ext:544524551, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.541278    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.642393    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.743165    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.844637    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.938948    2736 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158b8fb0efb495ce", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1a67d1a73659ce, ext:340986767, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1a67d1b35953e5, ext:544605605, loc:(*time.Location)(0x71d6440)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 13 15:55:00 minikube kubelet[2736]: E0313 15:55:00.945246    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:01 minikube kubelet[2736]: E0313 15:55:01.046587    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:01 minikube kubelet[2736]: E0313 15:55:01.146986    2736 kubelet.go:2266] node "minikube" not found
Mar 13 15:55:01 minikube kubelet[2736]: I0313 15:55:01.232801    2736 kubelet_node_status.go:75] Successfully registered node minikube
Mar 13 15:55:15 minikube kubelet[2736]: I0313 15:55:15.543251    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/647786aa-45a8-11e9-9f54-42bfe1a8eb9c-kube-proxy") pod "kube-proxy-54kld" (UID: "647786aa-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:15 minikube kubelet[2736]: I0313 15:55:15.543497    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/647786aa-45a8-11e9-9f54-42bfe1a8eb9c-xtables-lock") pod "kube-proxy-54kld" (UID: "647786aa-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:15 minikube kubelet[2736]: I0313 15:55:15.543585    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/643b7ef8-45a8-11e9-9f54-42bfe1a8eb9c-config-volume") pod "coredns-86c58d9df4-l9nfg" (UID: "643b7ef8-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:15 minikube kubelet[2736]: I0313 15:55:15.543616    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-r6qt5" (UniqueName: "kubernetes.io/secret/64398596-45a8-11e9-9f54-42bfe1a8eb9c-coredns-token-r6qt5") pod "coredns-86c58d9df4-qgn5b" (UID: "64398596-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:15 minikube kubelet[2736]: I0313 15:55:15.543638    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/647786aa-45a8-11e9-9f54-42bfe1a8eb9c-lib-modules") pod "kube-proxy-54kld" (UID: "647786aa-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:15 minikube kubelet[2736]: I0313 15:55:15.543705    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-k2vvl" (UniqueName: "kubernetes.io/secret/647786aa-45a8-11e9-9f54-42bfe1a8eb9c-kube-proxy-token-k2vvl") pod "kube-proxy-54kld" (UID: "647786aa-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:15 minikube kubelet[2736]: I0313 15:55:15.543732    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/64398596-45a8-11e9-9f54-42bfe1a8eb9c-config-volume") pod "coredns-86c58d9df4-qgn5b" (UID: "64398596-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:15 minikube kubelet[2736]: I0313 15:55:15.543752    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-r6qt5" (UniqueName: "kubernetes.io/secret/643b7ef8-45a8-11e9-9f54-42bfe1a8eb9c-coredns-token-r6qt5") pod "coredns-86c58d9df4-l9nfg" (UID: "643b7ef8-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:16 minikube kubelet[2736]: W0313 15:55:16.374806    2736 pod_container_deletor.go:75] Container "91db0f73a978f41bdd723f7cf003b21d01e9e76da104b9c7cada941c0de48434" not found in pod's containers
Mar 13 15:55:16 minikube kubelet[2736]: W0313 15:55:16.383840    2736 pod_container_deletor.go:75] Container "5c92f5b40eba036d5d2b4b162d6da84484fa6782326da8dc7406e698c904d963" not found in pod's containers
Mar 13 15:55:16 minikube kubelet[2736]: I0313 15:55:16.459388    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-2wczv" (UniqueName: "kubernetes.io/secret/6504feaf-45a8-11e9-9f54-42bfe1a8eb9c-default-token-2wczv") pod "kubernetes-dashboard-ccc79bfc9-p66fx" (UID: "6504feaf-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:16 minikube kubelet[2736]: I0313 15:55:16.459484    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/650e99c9-45a8-11e9-9f54-42bfe1a8eb9c-tmp") pod "storage-provisioner" (UID: "650e99c9-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:16 minikube kubelet[2736]: I0313 15:55:16.459516    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-ddbrd" (UniqueName: "kubernetes.io/secret/650e99c9-45a8-11e9-9f54-42bfe1a8eb9c-storage-provisioner-token-ddbrd") pod "storage-provisioner" (UID: "650e99c9-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:55:16 minikube kubelet[2736]: W0313 15:55:16.477801    2736 pod_container_deletor.go:75] Container "d87ae10767ded82ead5f69d0fe4425b7a9b20a3b80826285a5bc0b30a9d0a4be" not found in pod's containers
Mar 13 15:55:16 minikube kubelet[2736]: W0313 15:55:16.696371    2736 container.go:409] Failed to create summary reader for "/system.slice/run-r1c8b403853da431895507d2c0c635ebc.scope": none of the resources are being tracked.
Mar 13 15:55:16 minikube kubelet[2736]: W0313 15:55:16.698490    2736 container.go:409] Failed to create summary reader for "/system.slice/run-rd1fd0f9d580547a79df9e13eb682cd23.scope": none of the resources are being tracked.
Mar 13 15:55:36 minikube kubelet[2736]: E0313 15:55:36.725795    2736 pod_workers.go:190] Error syncing pod 6504feaf-45a8-11e9-9f54-42bfe1a8eb9c ("kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"
Mar 13 15:55:37 minikube kubelet[2736]: E0313 15:55:37.742563    2736 pod_workers.go:190] Error syncing pod 6504feaf-45a8-11e9-9f54-42bfe1a8eb9c ("kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"
Mar 13 15:55:42 minikube kubelet[2736]: E0313 15:55:42.119479    2736 pod_workers.go:190] Error syncing pod 6504feaf-45a8-11e9-9f54-42bfe1a8eb9c ("kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"
Mar 13 15:55:55 minikube kubelet[2736]: E0313 15:55:55.954521    2736 pod_workers.go:190] Error syncing pod 6504feaf-45a8-11e9-9f54-42bfe1a8eb9c ("kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"
Mar 13 15:56:02 minikube kubelet[2736]: E0313 15:56:02.120013    2736 pod_workers.go:190] Error syncing pod 6504feaf-45a8-11e9-9f54-42bfe1a8eb9c ("kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"
Mar 13 15:56:19 minikube kubelet[2736]: E0313 15:56:19.218665    2736 pod_workers.go:190] Error syncing pod 6504feaf-45a8-11e9-9f54-42bfe1a8eb9c ("kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"
Mar 13 15:56:22 minikube kubelet[2736]: E0313 15:56:22.117871    2736 pod_workers.go:190] Error syncing pod 6504feaf-45a8-11e9-9f54-42bfe1a8eb9c ("kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"
Mar 13 15:56:35 minikube kubelet[2736]: E0313 15:56:35.548271    2736 pod_workers.go:190] Error syncing pod 6504feaf-45a8-11e9-9f54-42bfe1a8eb9c ("kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"
Mar 13 15:56:38 minikube kubelet[2736]: I0313 15:56:38.030730    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-2wczv" (UniqueName: "kubernetes.io/secret/95a88ad0-45a8-11e9-9f54-42bfe1a8eb9c-default-token-2wczv") pod "tiller-deploy-6d6cc8dcb5-dn9th" (UID: "95a88ad0-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:56:38 minikube kubelet[2736]: W0313 15:56:38.743842    2736 pod_container_deletor.go:75] Container "3aee2d8487d56faf69eeed4eebf50dc52712ce8cf32d3fcaf453ee356dfd0727" not found in pod's containers
Mar 13 15:56:50 minikube kubelet[2736]: E0313 15:56:50.549643    2736 pod_workers.go:190] Error syncing pod 6504feaf-45a8-11e9-9f54-42bfe1a8eb9c ("kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-p66fx_kube-system(6504feaf-45a8-11e9-9f54-42bfe1a8eb9c)"
Mar 13 15:58:45 minikube kubelet[2736]: I0313 15:58:45.811259    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-drndt" (UniqueName: "kubernetes.io/secret/e1c0948d-45a8-11e9-9f54-42bfe1a8eb9c-default-token-drndt") pod "early-cow-postgresql-0" (UID: "e1c0948d-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:58:45 minikube kubelet[2736]: I0313 15:58:45.811924    2736 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-e1bfd380-45a8-11e9-9f54-42bfe1a8eb9c" (UniqueName: "kubernetes.io/host-path/e1c0948d-45a8-11e9-9f54-42bfe1a8eb9c-pvc-e1bfd380-45a8-11e9-9f54-42bfe1a8eb9c") pod "early-cow-postgresql-0" (UID: "e1c0948d-45a8-11e9-9f54-42bfe1a8eb9c")
Mar 13 15:58:46 minikube kubelet[2736]: W0313 15:58:46.517591    2736 pod_container_deletor.go:75] Container "9dddae187443dfff2bc3998845354e53f4f20476ac4534be2bb00778c441a833" not found in pod's containers

kubectl get pv,pvc:

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS   REASON   AGE
persistentvolume/pvc-e1bfd380-45a8-11e9-9f54-42bfe1a8eb9c   8Gi        RWO            Delete           Bound    default/data-early-cow-postgresql-0   standard                5m2s

NAME                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-early-cow-postgresql-0   Bound    pvc-e1bfd380-45a8-11e9-9f54-42bfe1a8eb9c   8Gi        RWO            standard       5m2s

kubectl describe pv,pvc:

Name:            pvc-e1bfd380-45a8-11e9-9f54-42bfe1a8eb9c
Labels:          <none>
Annotations:     hostPathProvisionerIdentity: 69aee956-45a8-11e9-9b8e-42bfe1a8eb9c
                 pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Bound
Claim:           default/data-early-cow-postgresql-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        8Gi
Node Affinity:   <none>
Message:         
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/hostpath-provisioner/pvc-e1bfd380-45a8-11e9-9f54-42bfe1a8eb9c
    HostPathType:  
Events:            <none>


Name:          data-early-cow-postgresql-0
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-e1bfd380-45a8-11e9-9f54-42bfe1a8eb9c
Labels:        app=postgresql
               release=early-cow
               role=master
Annotations:   control-plane.alpha.kubernetes.io/leader:
                 {"holderIdentity":"69aee9c1-45a8-11e9-9b8e-42bfe1a8eb9c","leaseDurationSeconds":15,"acquireTime":"2019-03-13T15:58:45Z","renewTime":"2019-...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      8Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age    From                                                           Message
  ----       ------                 ----   ----                                                           -------
  Normal     Provisioning           5m45s  k8s.io/minikube-hostpath 69aee9c1-45a8-11e9-9b8e-42bfe1a8eb9c  External provisioner is provisioning volume for claim "default/data-early-cow-postgresql-0"
  Normal     ProvisioningSucceeded  5m45s  k8s.io/minikube-hostpath 69aee9c1-45a8-11e9-9b8e-42bfe1a8eb9c  Successfully provisioned volume pvc-e1bfd380-45a8-11e9-9f54-42bfe1a8eb9c
Mounted By:  early-cow-postgresql-0

kubectl get all:

NAME                         READY   STATUS     RESTARTS   AGE
pod/early-cow-postgresql-0   0/1     Init:0/1   0          6m58s

NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/early-cow-postgresql            ClusterIP   10.98.244.231   <none>        5432/TCP   6m58s
service/early-cow-postgresql-headless   ClusterIP   None            <none>        5432/TCP   6m58s
service/kubernetes                      ClusterIP   10.96.0.1       <none>        443/TCP    10m

NAME                                    READY   AGE
statefulset.apps/early-cow-postgresql   0/1     6m58s

kubectl describe sc:

Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:           k8s.io/minikube-hostpath
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
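
(Note: VolumeBindingMode is Immediate here, which I assume is the "immediate" in the scheduler error; it can be confirmed with something like:)

kubectl get storageclass standard -o jsonpath='{.volumeBindingMode}'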

kubectl describe statefulset:

Name:               early-cow-postgresql
Namespace:          default
CreationTimestamp:  Wed, 13 Mar 2019 22:58:45 +0700
Selector:           app=postgresql,release=early-cow,role=master
Labels:             app=postgresql
                    chart=postgresql-3.13.1
                    heritage=Tiller
                    release=early-cow
Annotations:        <none>
Replicas:           824640955904 desired | 1 total
Update Strategy:    RollingUpdate
Pods Status:        0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=postgresql
           chart=postgresql-3.13.1
           heritage=Tiller
           release=early-cow
           role=master
  Init Containers:
   init-chmod-data:
    Image:      docker.io/bitnami/minideb:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      chown -R 1001:1001 /bitnami
      if [ -d /bitnami/postgresql/data ]; then
        chmod  0700 /bitnami/postgresql/data;
      fi
      
    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /bitnami/postgresql from data (rw)
  Containers:
   early-cow-postgresql:
    Image:      docker.io/bitnami/postgresql:10.7.0
    Port:       5432/TCP
    Host Port:  0/TCP
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [sh -c exec pg_isready -U "postgres" -h localhost] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [sh -c exec pg_isready -U "postgres" -h localhost] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      PGDATA:             /bitnami/postgresql
      POSTGRES_USER:      postgres
      POSTGRES_PASSWORD:  <set to the key 'postgresql-password' in secret 'early-cow-postgresql'>  Optional: false
    Mounts:
      /bitnami/postgresql from data (rw)
  Volumes:  <none>
Volume Claims:
  Name:          data
  StorageClass:  
  Labels:        <none>
  Annotations:   <none>
  Capacity:      8Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  8m4s  statefulset-controller  create Claim data-early-cow-postgresql-0 Pod early-cow-postgresql-0 in StatefulSet early-cow-postgresql success
  Normal  SuccessfulCreate  8m4s  statefulset-controller  create Pod early-cow-postgresql-0 in StatefulSet early-cow-postgresql successful

kubectl describe pods:

Name:               early-cow-postgresql-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/192.168.64.36
Start Time:         Wed, 13 Mar 2019 22:58:45 +0700
Labels:             app=postgresql
                    chart=postgresql-3.13.1
                    controller-revision-hash=early-cow-postgresql-84ff598768
                    heritage=Tiller
                    release=early-cow
                    role=master
                    statefulset.kubernetes.io/pod-name=early-cow-postgresql-0
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      StatefulSet/early-cow-postgresql
Init Containers:
  init-chmod-data:
    Container ID:  
    Image:         docker.io/bitnami/minideb:latest
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      chown -R 1001:1001 /bitnami
      if [ -d /bitnami/postgresql/data ]; then
        chmod  0700 /bitnami/postgresql/data;
      fi
      
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /bitnami/postgresql from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-drndt (ro)
Containers:
  early-cow-postgresql:
    Container ID:   
    Image:          docker.io/bitnami/postgresql:10.7.0
    Image ID:       
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [sh -c exec pg_isready -U "postgres" -h localhost] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [sh -c exec pg_isready -U "postgres" -h localhost] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      PGDATA:             /bitnami/postgresql
      POSTGRES_USER:      postgres
      POSTGRES_PASSWORD:  <set to the key 'postgresql-password' in secret 'early-cow-postgresql'>  Optional: false
    Mounts:
      /bitnami/postgresql from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-drndt (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-early-cow-postgresql-0
    ReadOnly:   false
  default-token-drndt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-drndt
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  9m9s (x6 over 9m9s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         9m9s                 default-scheduler  Successfully assigned default/early-cow-postgresql-0 to minikube
  Normal   Pulling           9m8s                 kubelet, minikube  pulling image "docker.io/bitnami/minideb:latest"
@tstromberg
Contributor

This sounds bad. I haven't had a chance to sort out the reproduction steps for it yet, however. To my knowledge, nothing has changed with PVCs recently.

https://stackoverflow.com/questions/54923806/why-do-i-get-unbound-immediate-persistentvolumeclaims-on-minikube has some steps which might be helpful. Do you mind looking through it to see if there are any hints which may help uncover the root cause?
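
In particular, it may be worth confirming that the storage-provisioner pod is healthy and what state the claim is actually in; something along these lines (claim name taken from your output above):

kubectl -n kube-system get pods
kubectl get storageclass
kubectl describe pvc data-early-cow-postgresql-0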

@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. area/storage storage bugs priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. triage/needs-information Indicates an issue needs more information in order to work on it. labels Mar 13, 2019
@wkexperimental
Author

wkexperimental commented Mar 14, 2019

Hi, I revisited some simple config files I had used before (they worked fine previously). I also changed the PVC accessModes to ReadWriteMany and retried, following https://stackoverflow.com/questions/54923806/why-do-i-get-unbound-immediate-persistentvolumeclaims-on-minikube.

pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-disk
  labels:
    stage: production
    name: database 
    app: postgres
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
  selector:
    stage: production
    name: database 
    app: postgres

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
  labels:
    stage: production
    name: database 
    app: postgres
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      stage: production
      name: database
      app: postgres
  template:
    metadata:
      labels:
        stage: production
        name: database 
        app: postgres
    spec:
      containers:
        - name: postgres
          image: 'postgres:latest'
          env:
            - name: POSTGRES_USER
              value: "postgres"
            - name: POSTGRES_PASSWORD
              value: "postgres"
          ports:
            - name: postgres-5432
              containerPort: 5432
          volumeMounts:
            - name: postgres-disk
              readOnly: false
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-disk
          persistentVolumeClaim:
            claimName: postgres-disk

It gave me the same error, "pod has unbound immediate PersistentVolumeClaims", when I ran kubectl apply -f . The PV and PVC seem to be created just fine, and the StorageClass also seems configured properly. Then I tried removing just the deployment (kubectl delete -f deployment.yaml) and re-applying it with kubectl apply -f deployment.yaml, still to no avail.
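
(For reference, the binding state can be checked with something like the following; the claim name and labels are from the manifests above:)

kubectl get pv,pvc
kubectl describe pvc postgres-disk
kubectl describe pod -l name=database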

minikube logs:

==> coredns <==
.:53
2019-03-14T05:15:17.151Z [INFO] CoreDNS-1.2.6
2019-03-14T05:15:17.151Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
[INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
==> kube-apiserver <==
I0314 05:15:06.347552       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0314 05:15:06.387550       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0314 05:15:06.429152       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0314 05:15:06.469216       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0314 05:15:06.505912       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0314 05:15:06.546627       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0314 05:15:06.587338       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0314 05:15:06.625733       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0314 05:15:06.669953       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0314 05:15:06.706985       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0314 05:15:06.749355       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0314 05:15:06.788976       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0314 05:15:06.827838       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0314 05:15:06.867626       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0314 05:15:06.906889       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0314 05:15:06.948418       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0314 05:15:06.987700       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0314 05:15:07.027286       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0314 05:15:07.067012       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0314 05:15:07.107692       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0314 05:15:07.146590       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0314 05:15:07.186746       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0314 05:15:07.225954       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0314 05:15:07.267195       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0314 05:15:07.308731       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0314 05:15:07.347248       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0314 05:15:07.388265       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0314 05:15:07.429569       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0314 05:15:07.468253       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0314 05:15:07.508142       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0314 05:15:07.544422       1 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0314 05:15:07.547810       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0314 05:15:07.587264       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0314 05:15:07.628718       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0314 05:15:07.666826       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0314 05:15:07.708358       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0314 05:15:07.749732       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0314 05:15:07.787684       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0314 05:15:07.823472       1 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0314 05:15:07.825568       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0314 05:15:07.865570       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0314 05:15:07.906570       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0314 05:15:07.946500       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0314 05:15:07.987373       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0314 05:15:08.026580       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0314 05:15:08.376573       1 controller.go:608] quota admission added evaluator for: serviceaccounts
I0314 05:15:09.219332       1 controller.go:608] quota admission added evaluator for: deployments.apps
I0314 05:15:09.258642       1 controller.go:608] quota admission added evaluator for: daemonsets.apps
I0314 05:15:14.786184       1 controller.go:608] quota admission added evaluator for: controllerrevisions.apps
I0314 05:15:14.933609       1 controller.go:608] quota admission added evaluator for: replicasets.apps
==> kube-scheduler <==
E0314 05:15:03.106317       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0314 05:15:03.107481       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0314 05:15:04.095349       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0314 05:15:04.097005       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0314 05:15:04.098649       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0314 05:15:04.099584       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0314 05:15:04.100620       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0314 05:15:04.102126       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0314 05:15:04.103330       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0314 05:15:04.106498       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0314 05:15:04.107570       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0314 05:15:04.108597       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0314 05:15:05.097349       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0314 05:15:05.099426       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0314 05:15:05.100795       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0314 05:15:05.101822       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0314 05:15:05.103513       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0314 05:15:05.105180       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0314 05:15:05.105216       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0314 05:15:05.109026       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0314 05:15:05.109031       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0314 05:15:05.109973       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0314 05:15:06.098708       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0314 05:15:06.101101       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0314 05:15:06.102097       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0314 05:15:06.104229       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0314 05:15:06.104768       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0314 05:15:06.106788       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0314 05:15:06.108498       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0314 05:15:06.111059       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0314 05:15:06.111676       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0314 05:15:06.112827       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0314 05:15:07.934482       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0314 05:15:08.035004       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0314 05:15:08.035150       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
I0314 05:15:08.041849       1 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
E0314 05:17:48.820880       1 factory.go:1519] Error scheduling default/database-dc89c578-c6gt7: pod has unbound immediate PersistentVolumeClaims; retrying
E0314 05:17:48.830529       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0314 05:17:48.831308       1 factory.go:1519] Error scheduling default/database-dc89c578-c6gt7: pod has unbound immediate PersistentVolumeClaims; retrying
E0314 05:17:48.840173       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0314 05:17:48.841465       1 factory.go:1519] Error scheduling default/database-dc89c578-c6gt7: pod has unbound immediate PersistentVolumeClaims; retrying
E0314 05:17:48.854614       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0314 05:17:48.855336       1 factory.go:1519] Error scheduling default/database-dc89c578-c6gt7: pod has unbound immediate PersistentVolumeClaims; retrying
E0314 05:17:48.878415       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0314 05:17:48.879434       1 factory.go:1519] Error scheduling default/database-dc89c578-c6gt7: pod has unbound immediate PersistentVolumeClaims; retrying
E0314 05:17:48.886378       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0314 05:17:48.906440       1 factory.go:1519] Error scheduling default/database-dc89c578-c6gt7: pod has unbound immediate PersistentVolumeClaims; retrying
E0314 05:17:48.923341       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0314 05:17:48.923800       1 factory.go:1519] Error scheduling default/database-dc89c578-c6gt7: pod has unbound immediate PersistentVolumeClaims; retrying
E0314 05:17:48.939614       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
==> kubelet <==
-- Logs begin at Thu 2019-03-14 05:13:33 UTC, end at Thu 2019-03-14 05:21:46 UTC. --
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.067705    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.169174    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.269668    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.370609    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.392885    2726 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158bbb58cdc853cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1a96b1a65717cd, ext:306848862, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1a96b1a96a9101, ext:358456726, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.471872    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.572840    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.673312    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.775035    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.794332    2726 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158bbb58cdc8649a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1a96b1a657289a, ext:306853162, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1a96b1a96ab563, ext:358466050, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.875560    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:00 minikube kubelet[2726]: E0314 05:15:00.976255    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:01 minikube kubelet[2726]: E0314 05:15:01.078583    2726 kubelet.go:2266] node "minikube" not found
Mar 14 05:15:01 minikube kubelet[2726]: I0314 05:15:01.083622    2726 kubelet_node_status.go:75] Successfully registered node minikube
Mar 14 05:15:14 minikube kubelet[2726]: I0314 05:15:14.950853    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/2658cbb6-4618-11e9-915b-32024e772b45-xtables-lock") pod "kube-proxy-5bkzc" (UID: "2658cbb6-4618-11e9-915b-32024e772b45")
Mar 14 05:15:14 minikube kubelet[2726]: I0314 05:15:14.950927    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/2658cbb6-4618-11e9-915b-32024e772b45-lib-modules") pod "kube-proxy-5bkzc" (UID: "2658cbb6-4618-11e9-915b-32024e772b45")
Mar 14 05:15:14 minikube kubelet[2726]: I0314 05:15:14.950954    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2658cbb6-4618-11e9-915b-32024e772b45-kube-proxy") pod "kube-proxy-5bkzc" (UID: "2658cbb6-4618-11e9-915b-32024e772b45")
Mar 14 05:15:14 minikube kubelet[2726]: I0314 05:15:14.950975    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-76kgn" (UniqueName: "kubernetes.io/secret/2658cbb6-4618-11e9-915b-32024e772b45-kube-proxy-token-76kgn") pod "kube-proxy-5bkzc" (UID: "2658cbb6-4618-11e9-915b-32024e772b45")
Mar 14 05:15:14 minikube kubelet[2726]: E0314 05:15:14.966436    2726 reflector.go:134] object-"kube-system"/"coredns-token-qdcj6": Failed to list *v1.Secret: secrets "coredns-token-qdcj6" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no path found to object
Mar 14 05:15:14 minikube kubelet[2726]: E0314 05:15:14.966556    2726 reflector.go:134] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no path found to object
Mar 14 05:15:15 minikube kubelet[2726]: I0314 05:15:15.051337    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-qdcj6" (UniqueName: "kubernetes.io/secret/266dbfdf-4618-11e9-915b-32024e772b45-coredns-token-qdcj6") pod "coredns-86c58d9df4-nwtcv" (UID: "266dbfdf-4618-11e9-915b-32024e772b45")
Mar 14 05:15:15 minikube kubelet[2726]: I0314 05:15:15.051412    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/266f5467-4618-11e9-915b-32024e772b45-config-volume") pod "coredns-86c58d9df4-jm8dz" (UID: "266f5467-4618-11e9-915b-32024e772b45")
Mar 14 05:15:15 minikube kubelet[2726]: I0314 05:15:15.051495    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-qdcj6" (UniqueName: "kubernetes.io/secret/266f5467-4618-11e9-915b-32024e772b45-coredns-token-qdcj6") pod "coredns-86c58d9df4-jm8dz" (UID: "266f5467-4618-11e9-915b-32024e772b45")
Mar 14 05:15:15 minikube kubelet[2726]: I0314 05:15:15.051530    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/266dbfdf-4618-11e9-915b-32024e772b45-config-volume") pod "coredns-86c58d9df4-nwtcv" (UID: "266dbfdf-4618-11e9-915b-32024e772b45")
Mar 14 05:15:16 minikube kubelet[2726]: I0314 05:15:16.059142    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-g6cxv" (UniqueName: "kubernetes.io/secret/270e485b-4618-11e9-915b-32024e772b45-default-token-g6cxv") pod "kubernetes-dashboard-ccc79bfc9-lzdm5" (UID: "270e485b-4618-11e9-915b-32024e772b45")
Mar 14 05:15:16 minikube kubelet[2726]: I0314 05:15:16.159795    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/27193718-4618-11e9-915b-32024e772b45-tmp") pod "storage-provisioner" (UID: "27193718-4618-11e9-915b-32024e772b45")
Mar 14 05:15:16 minikube kubelet[2726]: I0314 05:15:16.160004    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-h56zj" (UniqueName: "kubernetes.io/secret/27193718-4618-11e9-915b-32024e772b45-storage-provisioner-token-h56zj") pod "storage-provisioner" (UID: "27193718-4618-11e9-915b-32024e772b45")
Mar 14 05:15:16 minikube kubelet[2726]: W0314 05:15:16.904031    2726 container.go:409] Failed to create summary reader for "/system.slice/run-r397ed3de78a8482f86e19b26d78963d9.scope": none of the resources are being tracked.
Mar 14 05:15:16 minikube kubelet[2726]: W0314 05:15:16.981716    2726 pod_container_deletor.go:75] Container "e0cad8da786870ba189c3ff4e9fc15e6e31ad5cde4e6d5987bf95698da40da04" not found in pod's containers
Mar 14 05:15:16 minikube kubelet[2726]: W0314 05:15:16.985625    2726 pod_container_deletor.go:75] Container "9c870b392535bee3b7d47c8861929e74dae127c6edbee52837058b3ee723ec8f" not found in pod's containers
Mar 14 05:15:30 minikube kubelet[2726]: E0314 05:15:30.393488    2726 pod_workers.go:190] Error syncing pod 270e485b-4618-11e9-915b-32024e772b45 ("kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"
Mar 14 05:15:31 minikube kubelet[2726]: E0314 05:15:31.412005    2726 pod_workers.go:190] Error syncing pod 270e485b-4618-11e9-915b-32024e772b45 ("kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"
Mar 14 05:15:38 minikube kubelet[2726]: E0314 05:15:38.267412    2726 pod_workers.go:190] Error syncing pod 270e485b-4618-11e9-915b-32024e772b45 ("kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"
Mar 14 05:15:51 minikube kubelet[2726]: E0314 05:15:51.638842    2726 pod_workers.go:190] Error syncing pod 270e485b-4618-11e9-915b-32024e772b45 ("kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"
Mar 14 05:15:58 minikube kubelet[2726]: E0314 05:15:58.266965    2726 pod_workers.go:190] Error syncing pod 270e485b-4618-11e9-915b-32024e772b45 ("kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"
Mar 14 05:16:12 minikube kubelet[2726]: E0314 05:16:12.885295    2726 pod_workers.go:190] Error syncing pod 270e485b-4618-11e9-915b-32024e772b45 ("kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"
Mar 14 05:16:18 minikube kubelet[2726]: E0314 05:16:18.266787    2726 pod_workers.go:190] Error syncing pod 270e485b-4618-11e9-915b-32024e772b45 ("kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"
Mar 14 05:16:29 minikube kubelet[2726]: E0314 05:16:29.583627    2726 pod_workers.go:190] Error syncing pod 270e485b-4618-11e9-915b-32024e772b45 ("kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"
Mar 14 05:16:43 minikube kubelet[2726]: E0314 05:16:43.583256    2726 pod_workers.go:190] Error syncing pod 270e485b-4618-11e9-915b-32024e772b45 ("kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-lzdm5_kube-system(270e485b-4618-11e9-915b-32024e772b45)"
Mar 14 05:17:49 minikube kubelet[2726]: I0314 05:17:49.096241    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-82232042-4618-11e9-915b-32024e772b45" (UniqueName: "kubernetes.io/host-path/8225739b-4618-11e9-915b-32024e772b45-pvc-82232042-4618-11e9-915b-32024e772b45") pod "database-dc89c578-c6gt7" (UID: "8225739b-4618-11e9-915b-32024e772b45")
Mar 14 05:17:49 minikube kubelet[2726]: I0314 05:17:49.096323    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jc92s" (UniqueName: "kubernetes.io/secret/8225739b-4618-11e9-915b-32024e772b45-default-token-jc92s") pod "database-dc89c578-c6gt7" (UID: "8225739b-4618-11e9-915b-32024e772b45")
Mar 14 05:19:27 minikube kubelet[2726]: I0314 05:19:27.379418    2726 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-jc92s" (UniqueName: "kubernetes.io/secret/8225739b-4618-11e9-915b-32024e772b45-default-token-jc92s") pod "8225739b-4618-11e9-915b-32024e772b45" (UID: "8225739b-4618-11e9-915b-32024e772b45")
Mar 14 05:19:27 minikube kubelet[2726]: I0314 05:19:27.381682    2726 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8225739b-4618-11e9-915b-32024e772b45-pvc-82232042-4618-11e9-915b-32024e772b45" (OuterVolumeSpecName: "postgres-disk") pod "8225739b-4618-11e9-915b-32024e772b45" (UID: "8225739b-4618-11e9-915b-32024e772b45"). InnerVolumeSpecName "pvc-82232042-4618-11e9-915b-32024e772b45". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:19:27 minikube kubelet[2726]: I0314 05:19:27.382105    2726 reconciler.go:181] operationExecutor.UnmountVolume started for volume "postgres-disk" (UniqueName: "kubernetes.io/host-path/8225739b-4618-11e9-915b-32024e772b45-pvc-82232042-4618-11e9-915b-32024e772b45") pod "8225739b-4618-11e9-915b-32024e772b45" (UID: "8225739b-4618-11e9-915b-32024e772b45")
Mar 14 05:19:27 minikube kubelet[2726]: I0314 05:19:27.382274    2726 reconciler.go:301] Volume detached for volume "pvc-82232042-4618-11e9-915b-32024e772b45" (UniqueName: "kubernetes.io/host-path/8225739b-4618-11e9-915b-32024e772b45-pvc-82232042-4618-11e9-915b-32024e772b45") on node "minikube" DevicePath ""
Mar 14 05:19:27 minikube kubelet[2726]: I0314 05:19:27.401617    2726 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8225739b-4618-11e9-915b-32024e772b45-default-token-jc92s" (OuterVolumeSpecName: "default-token-jc92s") pod "8225739b-4618-11e9-915b-32024e772b45" (UID: "8225739b-4618-11e9-915b-32024e772b45"). InnerVolumeSpecName "default-token-jc92s". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:19:27 minikube kubelet[2726]: I0314 05:19:27.482696    2726 reconciler.go:301] Volume detached for volume "default-token-jc92s" (UniqueName: "kubernetes.io/secret/8225739b-4618-11e9-915b-32024e772b45-default-token-jc92s") on node "minikube" DevicePath ""
Mar 14 05:19:41 minikube kubelet[2726]: I0314 05:19:41.183454    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-82232042-4618-11e9-915b-32024e772b45" (UniqueName: "kubernetes.io/host-path/c5065b73-4618-11e9-915b-32024e772b45-pvc-82232042-4618-11e9-915b-32024e772b45") pod "database-dc89c578-p2mwg" (UID: "c5065b73-4618-11e9-915b-32024e772b45")
Mar 14 05:19:41 minikube kubelet[2726]: I0314 05:19:41.183553    2726 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jc92s" (UniqueName: "kubernetes.io/secret/c5065b73-4618-11e9-915b-32024e772b45-default-token-jc92s") pod "database-dc89c578-p2mwg" (UID: "c5065b73-4618-11e9-915b-32024e772b45")
Mar 14 05:19:41 minikube kubelet[2726]: W0314 05:19:41.304789    2726 container.go:422] Failed to get RecentStats("/system.slice/run-r66d625fe412a481b98fbc8aff4c5c1f8.scope") while determining the next housekeeping: unable to find data in memory cache

I used to be able to use the redis, mysql, and postgresql helm charts just fine, so at first I suspected something in my minikube config. But after reading through forums and reinstalling, the minikube config looks fine. I'm still having a hard time figuring out the root cause; any further help is appreciated. Thank you.
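
The checks below are a minimal way to see whether the claim is binding at all and whether minikube's default provisioner is running (the <claim-name> in the describe command is a placeholder for whatever PVC the chart created in the default namespace):

# Is there a StorageClass, and is one marked (default)?
kubectl get storageclass

# Do the chart's claims exist, and are they Pending or Bound?
kubectl get pvc --all-namespaces

# Events on a Pending claim usually name the provisioning failure.
kubectl describe pvc <claim-name> -n default

# minikube's provisioner runs as a pod in kube-system.
kubectl get pod storage-provisioner -n kube-system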

@wkexperimental
Author

Update:
Still trying to figure this out. I reinstalled the whole thing, including the OS (macOS Mojave 10.14.3), and I still get the same "pod has unbound immediate PersistentVolumeClaims" error when using the helm postgresql chart.

steps:

  1. minikube start
😄  minikube v0.35.0 on darwin (amd64)
🔥  Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.64.4
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
🚀  Launching Kubernetes v1.13.4 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
🔑  Configuring cluster permissions ...
🤔  Verifying component health .....
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!
  2. helm init
  3. helm install stable/postgresql

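A quick way to rule the chart in or out is to create a bare PVC against the cluster's default storage class and watch whether it binds; this is a minimal sketch (the name test-claim is arbitrary, and it assumes the default-storageclass and storage-provisioner addons are enabled):

# Create a throwaway claim; with no storageClassName set it uses the
# default class (standard on minikube).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# If provisioning works, STATUS should move from Pending to Bound within a few seconds.
kubectl get pvc test-claim -w

# Clean up.
kubectl delete pvc test-claim

If that claim binds but the chart's claim stays Pending, pinning the chart to the class explicitly (many of the stable charts expose a persistence.storageClass value, e.g. --set persistence.storageClass=standard) would narrow it down to the chart's claim template.
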
minikube logs:

==> coredns <==
.:53
2019-03-15T13:25:55.099Z [INFO] CoreDNS-1.2.6
2019-03-15T13:25:55.099Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
E0315 13:26:20.098406       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0315 13:26:20.101313       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0315 13:26:20.101325       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
==> kube-apiserver <==
I0315 13:25:39.597094       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0315 13:25:39.635114       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0315 13:25:39.689727       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0315 13:25:39.720972       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0315 13:25:39.761025       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0315 13:25:39.798187       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0315 13:25:39.840888       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0315 13:25:39.879788       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0315 13:25:39.916951       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0315 13:25:39.957450       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0315 13:25:39.996741       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0315 13:25:40.041292       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0315 13:25:40.076515       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0315 13:25:40.115823       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0315 13:25:40.154677       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0315 13:25:40.195208       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0315 13:25:40.239353       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0315 13:25:40.278453       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0315 13:25:40.315284       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0315 13:25:40.356643       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0315 13:25:40.397265       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0315 13:25:40.435348       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0315 13:25:40.478841       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0315 13:25:40.516438       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0315 13:25:40.557482       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0315 13:25:40.597866       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0315 13:25:40.633879       1 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0315 13:25:40.638779       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0315 13:25:40.677311       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0315 13:25:40.717801       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0315 13:25:40.757360       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0315 13:25:40.797898       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0315 13:25:40.837718       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0315 13:25:40.881839       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0315 13:25:40.914634       1 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0315 13:25:40.919658       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0315 13:25:40.957241       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0315 13:25:40.995834       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0315 13:25:41.036482       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0315 13:25:41.076240       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0315 13:25:41.117469       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0315 13:25:42.139658       1 controller.go:608] quota admission added evaluator for: serviceaccounts
I0315 13:25:42.616304       1 controller.go:608] quota admission added evaluator for: deployments.apps
I0315 13:25:42.660086       1 controller.go:608] quota admission added evaluator for: daemonsets.apps
I0315 13:25:45.860215       1 controller.go:608] quota admission added evaluator for: namespaces
I0315 13:25:48.498007       1 controller.go:608] quota admission added evaluator for: replicasets.apps
I0315 13:25:48.654928       1 controller.go:608] quota admission added evaluator for: controllerrevisions.apps
I0315 13:28:41.013066       1 controller.go:608] quota admission added evaluator for: deployments.extensions
I0315 13:29:16.277323       1 controller.go:608] quota admission added evaluator for: statefulsets.apps
E0315 13:29:16.546167       1 upgradeaware.go:343] Error proxying data from client to backend: read tcp 192.168.64.4:8443->192.168.64.1:50707: read: connection reset by peer
==> kube-scheduler <==
E0315 13:25:36.343516       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 13:25:36.345135       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0315 13:25:36.347764       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 13:25:36.348109       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0315 13:25:37.302555       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0315 13:25:37.306854       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0315 13:25:37.326614       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0315 13:25:37.327744       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0315 13:25:37.328832       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0315 13:25:37.340301       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0315 13:25:37.345230       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 13:25:37.346213       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0315 13:25:37.349253       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 13:25:37.350129       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0315 13:25:38.305635       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0315 13:25:38.308328       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0315 13:25:38.328345       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0315 13:25:38.330696       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0315 13:25:38.330774       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0315 13:25:38.341611       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0315 13:25:38.346809       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 13:25:38.347890       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0315 13:25:38.351140       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 13:25:38.352229       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0315 13:25:39.307152       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0315 13:25:39.309651       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0315 13:25:39.329438       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0315 13:25:39.331871       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0315 13:25:39.333020       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0315 13:25:39.342559       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0315 13:25:39.347653       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 13:25:39.348837       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0315 13:25:39.352359       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 13:25:39.353555       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0315 13:25:41.140974       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0315 13:25:41.242287       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0315 13:25:41.243018       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
I0315 13:25:41.252303       1 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
E0315 13:29:16.318584       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.338382       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.339011       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.351495       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.352159       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.364330       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.397530       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.406779       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.419514       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.426874       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.438126       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.460652       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
==> kubelet <==
-- Logs begin at Fri 2019-03-15 13:23:38 UTC, end at Fri 2019-03-15 13:29:34 UTC. --
Mar 15 13:25:32 minikube kubelet[2655]: E0315 13:25:32.569744    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:32 minikube kubelet[2655]: E0315 13:25:32.670279    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:32 minikube kubelet[2655]: E0315 13:25:32.772460    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:32 minikube kubelet[2655]: E0315 13:25:32.856269    2655 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158c24b1d57498f8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1b07cf6c6776f8, ext:232987527, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1b07cf7a205afc, ext:463208328, loc:(*time.Location)(0x71d6440)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 15 13:25:32 minikube kubelet[2655]: E0315 13:25:32.873425    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:32 minikube kubelet[2655]: E0315 13:25:32.974520    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.074846    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.175322    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.255425    2655 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158c24b1d57498f8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1b07cf6c6776f8, ext:232987527, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1b07cf7a6f761b, ext:468392618, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.276148    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.376803    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.476961    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.577455    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.656944    2655 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158c24b1d5746bfa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1b07cf6c6749fa, ext:232976009, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1b07cf7a6ec36d, ext:468346870, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.677745    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.778125    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.878694    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:33 minikube kubelet[2655]: E0315 13:25:33.979133    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:34 minikube kubelet[2655]: E0315 13:25:34.058315    2655 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158c24b1d57487e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1b07cf6c6765e5, ext:232983150, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1b07cf7a6ed59a, ext:468351523, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 15 13:25:34 minikube kubelet[2655]: E0315 13:25:34.079539    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:34 minikube kubelet[2655]: E0315 13:25:34.180267    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:34 minikube kubelet[2655]: E0315 13:25:34.280771    2655 kubelet.go:2266] node "minikube" not found
Mar 15 13:25:34 minikube kubelet[2655]: I0315 13:25:34.349238    2655 kubelet_node_status.go:75] Successfully registered node minikube
Mar 15 13:25:48 minikube kubelet[2655]: I0315 13:25:48.874440    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d8a99c8b-4725-11e9-bd7e-92d8e3ad2fc2-config-volume") pod "coredns-86c58d9df4-p5q46" (UID: "d8a99c8b-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:48 minikube kubelet[2655]: I0315 13:25:48.874557    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/d8b7a42a-4725-11e9-bd7e-92d8e3ad2fc2-xtables-lock") pod "kube-proxy-vq2q2" (UID: "d8b7a42a-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:48 minikube kubelet[2655]: I0315 13:25:48.874618    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ql7s" (UniqueName: "kubernetes.io/secret/d8a99c8b-4725-11e9-bd7e-92d8e3ad2fc2-coredns-token-7ql7s") pod "coredns-86c58d9df4-p5q46" (UID: "d8a99c8b-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:48 minikube kubelet[2655]: I0315 13:25:48.874654    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-dlhgf" (UniqueName: "kubernetes.io/secret/d8b7a42a-4725-11e9-bd7e-92d8e3ad2fc2-kube-proxy-token-dlhgf") pod "kube-proxy-vq2q2" (UID: "d8b7a42a-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:48 minikube kubelet[2655]: I0315 13:25:48.874683    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/d8b7a42a-4725-11e9-bd7e-92d8e3ad2fc2-lib-modules") pod "kube-proxy-vq2q2" (UID: "d8b7a42a-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:48 minikube kubelet[2655]: I0315 13:25:48.874715    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d8a72f3a-4725-11e9-bd7e-92d8e3ad2fc2-config-volume") pod "coredns-86c58d9df4-fpr9s" (UID: "d8a72f3a-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:48 minikube kubelet[2655]: I0315 13:25:48.874754    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d8b7a42a-4725-11e9-bd7e-92d8e3ad2fc2-kube-proxy") pod "kube-proxy-vq2q2" (UID: "d8b7a42a-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:48 minikube kubelet[2655]: I0315 13:25:48.874783    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ql7s" (UniqueName: "kubernetes.io/secret/d8a72f3a-4725-11e9-bd7e-92d8e3ad2fc2-coredns-token-7ql7s") pod "coredns-86c58d9df4-fpr9s" (UID: "d8a72f3a-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:49 minikube kubelet[2655]: W0315 13:25:49.000308    2655 container.go:422] Failed to get RecentStats("/system.slice/run-r8012b9241c68440fa9d9c30cb4552d5d.scope") while determining the next housekeeping: unable to find data in memory cache
Mar 15 13:25:49 minikube kubelet[2655]: W0315 13:25:49.048876    2655 container.go:409] Failed to create summary reader for "/system.slice/run-ra2876e34bee8440fb56f46ae7ae6f757.scope": none of the resources are being tracked.
Mar 15 13:25:49 minikube kubelet[2655]: W0315 13:25:49.711742    2655 pod_container_deletor.go:75] Container "72c7a2cc9184a04dec467fabc71f48df318108ae45a03e0713ebb7ac5a7b0a5a" not found in pod's containers
Mar 15 13:25:50 minikube kubelet[2655]: W0315 13:25:50.183754    2655 pod_container_deletor.go:75] Container "a5aca1c961efa802544a1c9c98acd5e6ac488386a75d1333b71cb9da458794ac" not found in pod's containers
Mar 15 13:25:50 minikube kubelet[2655]: I0315 13:25:50.591290    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-lm6c5" (UniqueName: "kubernetes.io/secret/d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2-default-token-lm6c5") pod "kubernetes-dashboard-ccc79bfc9-fvl8q" (UID: "d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:50 minikube kubelet[2655]: I0315 13:25:50.691628    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-xnmtz" (UniqueName: "kubernetes.io/secret/d9de1d5d-4725-11e9-bd7e-92d8e3ad2fc2-storage-provisioner-token-xnmtz") pod "storage-provisioner" (UID: "d9de1d5d-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:50 minikube kubelet[2655]: I0315 13:25:50.691822    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/d9de1d5d-4725-11e9-bd7e-92d8e3ad2fc2-tmp") pod "storage-provisioner" (UID: "d9de1d5d-4725-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:25:51 minikube kubelet[2655]: W0315 13:25:51.235875    2655 pod_container_deletor.go:75] Container "0ec677ed1ded9e85bf49d9866f725ce9ffd44a6088863a04e0d1a84b0e03d300" not found in pod's containers
Mar 15 13:26:22 minikube kubelet[2655]: E0315 13:26:22.652428    2655 pod_workers.go:190] Error syncing pod d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2 ("kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"
Mar 15 13:26:23 minikube kubelet[2655]: E0315 13:26:23.681664    2655 pod_workers.go:190] Error syncing pod d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2 ("kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"
Mar 15 13:26:30 minikube kubelet[2655]: E0315 13:26:30.488717    2655 pod_workers.go:190] Error syncing pod d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2 ("kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"
Mar 15 13:26:46 minikube kubelet[2655]: E0315 13:26:46.956706    2655 pod_workers.go:190] Error syncing pod d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2 ("kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"
Mar 15 13:26:50 minikube kubelet[2655]: E0315 13:26:50.488242    2655 pod_workers.go:190] Error syncing pod d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2 ("kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"
Mar 15 13:27:01 minikube kubelet[2655]: E0315 13:27:01.661627    2655 pod_workers.go:190] Error syncing pod d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2 ("kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-fvl8q_kube-system(d9cb2dff-4725-11e9-bd7e-92d8e3ad2fc2)"
Mar 15 13:28:41 minikube kubelet[2655]: I0315 13:28:41.156197    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-lm6c5" (UniqueName: "kubernetes.io/secret/3f73e60d-4726-11e9-bd7e-92d8e3ad2fc2-default-token-lm6c5") pod "tiller-deploy-6d6cc8dcb5-drhdp" (UID: "3f73e60d-4726-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:29:16 minikube kubelet[2655]: I0315 13:29:16.588253    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-6w9cf" (UniqueName: "kubernetes.io/secret/547a1ff7-4726-11e9-bd7e-92d8e3ad2fc2-default-token-6w9cf") pod "happy-penguin-postgresql-0" (UID: "547a1ff7-4726-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:29:16 minikube kubelet[2655]: I0315 13:29:16.588337    2655 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-5478d518-4726-11e9-bd7e-92d8e3ad2fc2" (UniqueName: "kubernetes.io/host-path/547a1ff7-4726-11e9-bd7e-92d8e3ad2fc2-pvc-5478d518-4726-11e9-bd7e-92d8e3ad2fc2") pod "happy-penguin-postgresql-0" (UID: "547a1ff7-4726-11e9-bd7e-92d8e3ad2fc2")
Mar 15 13:29:16 minikube kubelet[2655]: W0315 13:29:16.704154    2655 container.go:422] Failed to get RecentStats("/system.slice/run-re2715426a0de4bdd9683b53a727da2f0.scope") while determining the next housekeeping: unable to find data in memory cache
Mar 15 13:29:16 minikube kubelet[2655]: W0315 13:29:16.820713    2655 pod_container_deletor.go:75] Container "f50633c4aa9a68d27f8baf3e5de110f4a0e095755140a94f2e08f2aa566ec2be" not found in pod's containers

I'm not sure why I got some of these errors in the minikube logs:
coredns:

E0315 13:26:20.098406       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0315 13:26:20.101313       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0315 13:26:20.101325       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

kube-scheduler:

E0315 13:25:36.343516       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 13:25:36.345135       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0315 13:25:36.347764       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 13:25:36.348109       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0315 13:25:37.302555       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0315 13:25:37.306854       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0315 13:25:37.326614       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0315 13:25:37.327744       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0315 13:25:37.328832       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0315 13:25:37.340301       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0315 13:25:37.345230       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 13:25:37.346213       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0315 13:25:37.349253       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 13:25:37.350129       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0315 13:25:38.305635       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0315 13:25:38.308328       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0315 13:25:38.328345       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0315 13:25:38.330696       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0315 13:25:38.330774       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0315 13:25:38.341611       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0315 13:25:38.346809       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 13:25:38.347890       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0315 13:25:38.351140       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 13:25:38.352229       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0315 13:25:39.307152       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0315 13:25:39.309651       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0315 13:25:39.329438       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0315 13:25:39.331871       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0315 13:25:39.333020       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0315 13:25:39.342559       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0315 13:25:39.347653       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 13:25:39.348837       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0315 13:25:39.352359       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 13:25:39.353555       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0315 13:25:41.140974       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0315 13:25:41.242287       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0315 13:25:41.243018       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
I0315 13:25:41.252303       1 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
E0315 13:29:16.318584       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.338382       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.339011       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.351495       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.352159       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.364330       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.397530       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.406779       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.419514       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.426874       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
E0315 13:29:16.438126       1 factory.go:1519] Error scheduling default/happy-penguin-postgresql-0: pod has unbound immediate PersistentVolumeClaims; retrying
E0315 13:29:16.460652       1 scheduler.go:546] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims

I've been trying to find a solution for this from what I could find on the internet, but I still haven't been able to make it work.
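For reference, the standard way to see why a claim stays pending is to describe it and check its events; the pod name below is taken from the logs above, and the claim name is only a guess at what the chart generates:

# list claims and their status (Pending/Bound)
kubectl get pvc

# show the events explaining why a specific claim is not binding
kubectl describe pvc data-happy-penguin-postgresql-0

# show the scheduling events for the pod itself
kubectl describe pod happy-penguin-postgresql-0

If the claim stays Pending and its events mention a missing storage class or provisioner, the problem is on the provisioning side rather than in the chart.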

@wllmtrng

+1, same issue on macOS.

@wllmtrng

Steps to reproduce:

  1. Install a Helm chart whose pod binds to a persistent volume with ReadWriteOnce access on a minikube cluster. In my case it was the Confluent Kafka distribution.
  2. Shutdown minikube
  3. Start minikube.
  4. Observe that the chart's pods report the error "pod has unbound immediate PersistentVolumeClaims".

Workaround:
Modify the underlying YAML files to request a persistent volume with ReadWriteMany access.
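A rough sketch of the kind of change (illustrative only, not the actual chart template; names and sizes are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-release-0          # placeholder claim name
spec:
  accessModes:
    - ReadWriteMany                # was ReadWriteOnce
  resources:
    requests:
      storage: 8Gi                 # whatever the chart default is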

@rosscdh

rosscdh commented May 4, 2019

+1, Jenkins X install after the previous install was deleted

@rosscdh

rosscdh commented May 7, 2019

It seems the trick is just to wait; the error goes away eventually, once the dependencies come up.
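If you would rather watch it resolve than guess, the claims and pods can be watched directly (plain kubectl, nothing chart-specific):

# watch claims move from Pending to Bound
kubectl get pvc --watch

# watch the pods come up once the claims bind
kubectl get pods --watch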

@wkexperimental
Author

The issue resolved itself; I don't know what changed. It now works as expected, with no such error anymore.

@hutchic

hutchic commented May 9, 2019

I can reproduce this with the postgres Helm chart version 3.18.3. The problem does not show up in version 3.9.1, however.

@tobiasjpark

Still having this issue while trying to deploy MinIO. Changing the YAML to say ReadWriteMany instead of ReadWriteOnce does not fix the problem.

pranav-patil referenced this issue in pranav-patil/spring-kubernetes-microservices Sep 2, 2019
@get2arun

get2arun commented Jan 6, 2020

I see this error in Minishift; changing from ReadWriteOnce to ReadWriteMany fixed the problem for me.

@mustela

mustela commented Jan 6, 2020

I'm still having the same issue as #3869 (comment); changing the access mode to ReadWriteMany didn't fix it.

@theAkito

theAkito commented Feb 5, 2020

Can this issue be re-opened, or is a new issue needed?

@Toady00

Toady00 commented Mar 31, 2020

I have previously run the elasticsearch operator quickstart on minikube without issue. I upgraded minikube and deleted my old hyperkit based minikube server. Running through ES quickstart gave me this exact same error. I've since changed the version of kubernetes in minikube via minikube config set kubernetes-version 1.16.0 and created a new server. Quickstart works fine with that version of kube. There's something going on with 1.18 and minikube.
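For anyone else trying the same downgrade, the sequence is roughly this (it recreates the cluster, so anything in the old one is lost):

# remove the existing cluster
minikube delete

# pin the Kubernetes version for new clusters
minikube config set kubernetes-version 1.16.0

# create a fresh cluster with that version
minikube start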

@the-nw1-group

I'm encountering the same issue with redis-ha from helm/charts, with minikube 1.9.0, and kubectl version v1.18.0.

Do I need to raise a new issue here for this?

@yaron-idan

Getting the same error when trying to install rabbitmq from bitnami's chart. Using minikube 1.9.2 and kubectl 1.18.0.

@FrancoisZhang

Same issue with minikube 1.9.2 and the latest Elasticsearch operator.

@kkmathigir

I have previously run the elasticsearch operator quickstart on minikube without issue. I upgraded minikube and deleted my old hyperkit based minikube server. Running through ES quickstart gave me this exact same error. I've since changed the version of kubernetes in minikube via minikube config set kubernetes-version 1.16.0 and created a new server. Quickstart works fine with that version of kube. There's something going on with 1.18 and minikube.

I hit the exact same problem, and I'm a first-timer with minikube and ECK. Your post helped me revert to an older Kubernetes version, and then the ECK/Elasticsearch cluster got running. Thanks a lot for posting.

@pchmielecki87

Any update on this?

@svanschalkwyk

Same here, with incubator/solr:

running "VolumeBinding" filter plugin for pod "solr-0": pod has unbound immediate PersistentVolumeClaims

@ningyougang

Can you execute kubectl get storageclasses.storage.k8s.io to check whether a storage class exists on your k8s cluster?
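On a healthy minikube with the default-storageclass and storage-provisioner addons enabled, the output typically looks something like this (exact columns vary between Kubernetes versions):

kubectl get storageclasses.storage.k8s.io
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   10m

If no class is marked (default), dynamically provisioned PVCs without an explicit storageClassName will stay Pending.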

@declark1

I'm having the same issue attempting to create a dynamic PVC in Argo Workflows on minikube (Kubernetes 1.18.x). Reverting to Kubernetes 1.16 solved the problem, so there seems to be an issue with 1.18 on minikube.

@jwandrews

My $0.02:
After some digging, I found that the storage-provisioner addon was failing to create its pod for some reason, even though minikube addons list showed it was enabled.

After kicking the addon with minikube addons enable storage-provisioner, it spun up, immediately created the PVs, and everything started ticking like a well-oiled machine.

Not guaranteed to work for everyone, but it worked for me.
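For anyone wanting to try the same thing, the sequence is roughly this (the pod name is the storage-provisioner pod minikube creates in kube-system, visible in the logs above):

# confirm the addon claims to be enabled
minikube addons list

# check whether its pod actually exists and is Running
kubectl -n kube-system get pod storage-provisioner

# re-enable (kick) the addon
minikube addons enable storage-provisioner

# watch the stuck claims bind
kubectl get pvc --watch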

@pythonebasta

Changing from ReadWriteOnce to ReadWriteMany fixed the problem for me.

@ImNM

ImNM commented Jul 27, 2021

Hi guys, I run the Elasticsearch Helm chart on Kubernetes in Docker Desktop on an M1 Mac.

I fixed this problem; let me explain my situation.
First, heap memory ran low and the Elasticsearch cluster went down, and I needed to recover it, so I ran helm uninstall elasticsearch (I don't think this is a good way).
Anyway, helm uninstall did not delete the PVC, so my PVC volume mount kept lingering.
On reinstall I got a "User system:node:docker-desktop cannot get resource persistentvolumeclaims in API group" error, so I think some policy or security setting is involved, similar to adding RBAC cluster roles for fluentd or Prometheus.

So, in the chart's values.yaml (https://github.com/elastic/helm-charts/blob/master/elasticsearch/values.yaml) I found this:

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir

I changed create to true and deployed with my values file using this command:

helm install elasticsearch elastic/elasticsearch -f elasticsearch-config.yaml

Finally, I can successfully reach my own data (the remaining PVC volume) and got my Kibana dashboard back.

I don't know whether this is the right way, but try it if you uninstalled the Helm release (which didn't delete the PVC volume) and are reinstalling the same thing.
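Equivalently, assuming the chart still exposes that flag, the same override can be passed on the command line instead of editing the values file:

# enable the pod security policy the chart ships with
helm install elasticsearch elastic/elasticsearch --set podSecurityPolicy.create=true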
