
persistentVolume: Using containers[i].volumeMounts[j].subPath produces "no such file or directory" error #4634

Open
docktermj opened this issue Jun 28, 2019 · 26 comments
Labels
addon/storage-provisioner: Issues relating to storage provisioner addon
area/mount
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@docktermj

docktermj commented Jun 28, 2019

Description

Using containers[i].volumeMounts[j].subPath produces "no such file or directory" errors.

$ kubectl describe pods -n my-namespace my-job-bad-6962x

:
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m4s                   default-scheduler  Successfully assigned my-namespace/my-job-bad-6962x to minikube
  Normal   Pulled     3m27s (x8 over 4m48s)  kubelet, minikube  Successfully pulled image "docker.io/centos:latest"
  Warning  Failed     3m27s (x8 over 4m48s)  kubelet, minikube  Error: stat /opt/my-path/: no such file or directory

Oddly, when a pod is run without subPath, not only does it work, but it also initializes something that allows a Pod with subPath to work. There's an initialization problem in 'minikube' somewhere.

Steps to reproduce the issue:

  1. Start minikube:
$ minikube start --cpus 4 --memory 8192 --vm-driver kvm2

😄  minikube v1.2.0 on linux (amd64)
🔥  Creating kvm2 VM (CPUs=4, Memory=8192MB, Disk=20000MB) ...
🐳  Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
🚜  Pulling images ...
🚀  Launching Kubernetes ... 
⌛  Verifying: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"
  2. Create my-namespace.yaml file:
cat <<EOT > my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    name: my-namespace
EOT
  3. Create namespace:
kubectl create -f my-namespace.yaml
  4. Create my-persistent-volume.yaml file:
cat <<EOT > my-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persistent-volume
  labels:
    type: local
  namespace: my-namespace
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/opt/my-path/"
EOT
  5. Create persistent volume:
kubectl create -f my-persistent-volume.yaml
  6. Create my-persistent-volume-claim.yaml file:
cat <<EOT > my-persistent-volume-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    cattle.io/creator: norman
  name: my-persistent-volume-claim
  namespace: my-namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: "manual"
  volumeName: my-persistent-volume
EOT
  7. Create persistent volume claim:
kubectl create -f my-persistent-volume-claim.yaml
  8. Create my-job-bad.yaml file:
cat <<EOT > my-job-bad.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-bad
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: subpath-test
          image: docker.io/centos:latest
          imagePullPolicy: Always
          command: ["sleep"]
          args: ["infinity"]
          volumeMounts:
            - name: my-volume
              mountPath: /opt/my-subpath
              subPath: my-subpath-1    
      restartPolicy: Never
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-persistent-volume-claim
EOT
  9. Create job with subPath that fails:
kubectl create -f my-job-bad.yaml
  10. Watch for the error:
$ kubectl get pods --namespace my-namespace --watch

NAME               READY   STATUS              RESTARTS   AGE
my-job-bad-6962x   0/1     ContainerCreating   0          11s
my-job-bad-6962x   0/1     CreateContainerConfigError   0          77s
  11. Create my-job-good.yaml file:
cat <<EOT > my-job-good.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-good
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: subpath-test
          image: docker.io/centos:latest
          imagePullPolicy: Always
          command: ["sleep"]
          args: ["infinity"]
          volumeMounts:
            - name: my-volume
              mountPath: /opt/my-subpath
      restartPolicy: Never
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-persistent-volume-claim
EOT
  12. Create job without subPath that succeeds:
kubectl create -f my-job-good.yaml
  13. Now both the "good job" and the "bad job" work:
$ kubectl get pods --namespace my-namespace --watch

NAME                READY   STATUS    RESTARTS   AGE
my-job-bad-6962x    1/1     Running   0          4m25s
my-job-good-dlfxn   1/1     Running   0          12s

Describe the results you received:

  1. View error in my-job-bad.
$ kubectl describe pods -n my-namespace my-job-bad-6962x
Name:               my-job-bad-6962x
Namespace:          my-namespace
Priority:           0
PriorityClassName:  <none>
Node:               minikube/192.168.122.59
Start Time:         Fri, 28 Jun 2019 16:36:29 -0400
Labels:             controller-uid=cdc5f87b-9c59-4f58-94d5-d286f7597d65
                    job-name=my-job-bad
Annotations:        <none>
Status:             Running
IP:                 172.17.0.4
Controlled By:      Job/my-job-bad
Containers:
  subpath-test:
    Container ID:  docker://965ad24defc7d2364982d9c7c5e8a5efa9293578be3b7cb7ef80cfe6e8ab3128
    Image:         docker.io/centos:latest
    Image ID:      docker-pullable://centos@sha256:b5e66c4651870a1ad435cd75922fe2cb943c9e973a9673822d1414824a1d0475
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
    Args:
      infinity
    State:          Running
      Started:      Fri, 28 Jun 2019 16:40:48 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/my-subpath from my-volume (rw,path="my-subpath-1")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wrmc5 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  my-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  my-persistent-volume-claim
    ReadOnly:   false
  default-token-wrmc5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wrmc5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m4s                   default-scheduler  Successfully assigned my-namespace/my-job-bad-6962x to minikube
  Normal   Pulled     3m27s (x8 over 4m48s)  kubelet, minikube  Successfully pulled image "docker.io/centos:latest"
  Warning  Failed     3m27s (x8 over 4m48s)  kubelet, minikube  Error: stat /opt/my-path/: no such file or directory
  Normal   Pulling    3m16s (x9 over 6m3s)   kubelet, minikube  Pulling image "docker.io/centos:latest"

Describe the results you expected:

A Pod that uses containers[i].volumeMounts[j].subPath should come up without
first requiring a Pod without subPath to initialize "something".

Additional information you deem important (e.g. issue happens only occasionally):

As shown above, when running without subPath, the Pod comes up properly.
My guess is that when subPath is used, an initialization step is missing.

Version of Kubernetes:

  • Output of kubectl version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • Output of minikube version:
$ minikube version
minikube version: v1.2.0

Cleanup

kubectl delete -f my-job-good.yaml
kubectl delete -f my-job-bad.yaml
kubectl delete -f my-persistent-volume-claim.yaml
kubectl delete -f my-persistent-volume.yaml
kubectl delete -f my-namespace.yaml
minikube stop
minikube delete

The output of the minikube logs command:

$ minikube logs
==> coredns <==
.:53
2019-06-28T20:31:33.762Z [INFO] CoreDNS-1.3.1
2019-06-28T20:31:33.762Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-06-28T20:31:33.762Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843

==> dmesg <==
[Jun28 20:29] APIC calibration not consistent with PM-Timer: 106ms instead of 100ms
[  +0.000000] core: CPUID marked event: 'bus cycles' unavailable
[  +0.001021]  #2
[  +0.001080]  #3
[  +0.022772] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[  +0.118421] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ +21.645559] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[  +0.025752] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[  +0.025602] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[  +0.230451] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.047506] systemd-fstab-generator[1109]: Ignoring "noauto" for root device
[  +0.006948] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000004] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.614842] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +0.769835] vboxguest: loading out-of-tree module taints kernel.
[  +0.003883] vboxguest: PCI device not found, probably running on physical hardware.
[  +7.041667] systemd-fstab-generator[1986]: Ignoring "noauto" for root device
[Jun28 20:30] systemd-fstab-generator[2741]: Ignoring "noauto" for root device
[  +9.291779] systemd-fstab-generator[2990]: Ignoring "noauto" for root device
[Jun28 20:31] kauditd_printk_skb: 68 callbacks suppressed
[ +13.855634] tee (3708): /proc/3426/oom_adj is deprecated, please use /proc/3426/oom_score_adj instead.
[  +7.361978] kauditd_printk_skb: 20 callbacks suppressed
[  +6.562157] kauditd_printk_skb: 47 callbacks suppressed
[  +3.921583] NFSD: Unable to end grace period: -110

==> kernel <==
 21:02:00 up 32 min,  0 users,  load average: 0.33, 0.35, 0.34
Linux minikube 4.15.0 #1 SMP Sun Jun 23 23:02:01 PDT 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:54:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:55:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:55:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:56:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:56:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:57:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:57:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:58:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:58:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:59:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
error: no objects passed to apply
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:59:33+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T21:00:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T21:00:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T21:01:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T21:01:32+00:00 ==

==> kube-apiserver <==
I0628 20:31:16.446365       1 client.go:354] scheme "" not registered, fallback to default scheme
I0628 20:31:16.446451       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0628 20:31:16.446513       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0628 20:31:16.463319       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0628 20:31:16.464540       1 client.go:354] parsed scheme: ""
I0628 20:31:16.464624       1 client.go:354] scheme "" not registered, fallback to default scheme
I0628 20:31:16.464699       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0628 20:31:16.464799       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0628 20:31:16.479368       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0628 20:31:19.006900       1 secure_serving.go:116] Serving securely on [::]:8443
I0628 20:31:19.007039       1 available_controller.go:374] Starting AvailableConditionController
I0628 20:31:19.007127       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0628 20:31:19.007616       1 crd_finalizer.go:255] Starting CRDFinalizer
I0628 20:31:19.007724       1 autoregister_controller.go:140] Starting autoregister controller
I0628 20:31:19.007821       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0628 20:31:19.007995       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0628 20:31:19.008035       1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
I0628 20:31:19.008720       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0628 20:31:19.008768       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0628 20:31:19.009739       1 controller.go:83] Starting OpenAPI controller
I0628 20:31:19.009854       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0628 20:31:19.009924       1 naming_controller.go:288] Starting NamingConditionController
I0628 20:31:19.010007       1 establishing_controller.go:73] Starting EstablishingController
I0628 20:31:19.010074       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
E0628 20:31:19.010957       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.223, ResourceVersion: 0, AdditionalErrorMsg: 
I0628 20:31:19.011694       1 controller.go:81] Starting OpenAPI AggregationController
I0628 20:31:19.116245       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0628 20:31:19.116318       1 cache.go:39] Caches are synced for autoregister controller
I0628 20:31:19.207447       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0628 20:31:19.208523       1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0628 20:31:20.004685       1 controller.go:107] OpenAPI AggregationController: Processing item 
I0628 20:31:20.004761       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0628 20:31:20.004891       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0628 20:31:20.021255       1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0628 20:31:20.027177       1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0628 20:31:20.027214       1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0628 20:31:21.789899       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0628 20:31:22.069843       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0628 20:31:22.085456       1 controller.go:606] quota admission added evaluator for: endpoints
W0628 20:31:22.375007       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.39.223]
I0628 20:31:22.428889       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0628 20:31:22.872788       1 controller.go:606] quota admission added evaluator for: namespaces
I0628 20:31:23.516372       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0628 20:31:23.843781       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0628 20:31:24.145272       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0628 20:31:30.267937       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0628 20:31:30.368017       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0628 20:36:29.652258       1 controller.go:606] quota admission added evaluator for: jobs.batch
E0628 20:46:26.034713       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0628 20:59:47.138291       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-proxy <==
W0628 20:31:31.469387       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0628 20:31:31.486180       1 server_others.go:143] Using iptables Proxier.
W0628 20:31:31.486736       1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0628 20:31:31.487461       1 server.go:534] Version: v1.15.0
I0628 20:31:31.506151       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0628 20:31:31.506240       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0628 20:31:31.506405       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0628 20:31:31.506668       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0628 20:31:31.507332       1 config.go:96] Starting endpoints config controller
I0628 20:31:31.507381       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0628 20:31:31.507608       1 config.go:187] Starting service config controller
I0628 20:31:31.507653       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0628 20:31:31.607874       1 controller_utils.go:1036] Caches are synced for service config controller
I0628 20:31:31.607905       1 controller_utils.go:1036] Caches are synced for endpoints config controller

==> kube-scheduler <==
I0628 20:31:13.084899       1 serving.go:319] Generated self-signed cert in-memory
W0628 20:31:14.157003       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0628 20:31:14.157134       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0628 20:31:14.157255       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0628 20:31:14.162595       1 server.go:142] Version: v1.15.0
I0628 20:31:14.162743       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0628 20:31:14.164519       1 authorization.go:47] Authorization is disabled
W0628 20:31:14.164559       1 authentication.go:55] Authentication is disabled
I0628 20:31:14.164685       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0628 20:31:14.170555       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0628 20:31:19.135466       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0628 20:31:19.195942       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0628 20:31:19.196169       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0628 20:31:19.196451       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0628 20:31:19.196823       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0628 20:31:19.203454       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0628 20:31:19.203884       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0628 20:31:19.206488       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0628 20:31:19.206807       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0628 20:31:19.206488       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0628 20:31:20.141756       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0628 20:31:20.198193       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0628 20:31:20.199220       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0628 20:31:20.206555       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0628 20:31:20.207831       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0628 20:31:20.213491       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0628 20:31:20.213838       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0628 20:31:20.213983       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0628 20:31:20.217183       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0628 20:31:20.217406       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0628 20:31:22.077586       1 leaderelection.go:235] attempting to acquire leader lease  kube-system/kube-scheduler...
I0628 20:31:22.088791       1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
E0628 20:31:30.321683       1 factory.go:702] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Fri 2019-06-28 20:29:38 UTC, end at Fri 2019-06-28 21:02:00 UTC. --
Jun 28 20:31:20 minikube kubelet[3010]: E0628 20:31:20.146241    3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee402e59e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c11d99e, ext:3375456312, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a962c502, ext:3598863731, loc:(*time.Location)(0x781d740)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:20 minikube kubelet[3010]: E0628 20:31:20.547212    3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee4031da5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c1211a5, ext:3375470655, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9633094, ext:3598891777, loc:(*time.Location)(0x781d740)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:20 minikube kubelet[3010]: E0628 20:31:20.948190    3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee402603f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c11543f, ext:3375422170, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9c2709a, ext:3605134986, loc:(*time.Location)(0x781d740)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:21 minikube kubelet[3010]: E0628 20:31:21.347536    3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee402e59e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c11d99e, ext:3375456312, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9c39b54, ext:3605210674, loc:(*time.Location)(0x781d740)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:21 minikube kubelet[3010]: E0628 20:31:21.746377    3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee4031da5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c1211a5, ext:3375470655, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9c4094f, ext:3605238835, loc:(*time.Location)(0x781d740)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:22 minikube kubelet[3010]: E0628 20:31:22.147513    3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee402603f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c11543f, ext:3375422170, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9ea1830, ext:3607733600, loc:(*time.Location)(0x781d740)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:30 minikube kubelet[3010]: I0628 20:31:30.495279    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/3a4da883-17ca-4a5b-9324-ca24aee64a30-lib-modules") pod "kube-proxy-b2jpw" (UID: "3a4da883-17ca-4a5b-9324-ca24aee64a30")
Jun 28 20:31:30 minikube kubelet[3010]: I0628 20:31:30.495365    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/3a4da883-17ca-4a5b-9324-ca24aee64a30-kube-proxy") pod "kube-proxy-b2jpw" (UID: "3a4da883-17ca-4a5b-9324-ca24aee64a30")
Jun 28 20:31:30 minikube kubelet[3010]: I0628 20:31:30.495415    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-vng88" (UniqueName: "kubernetes.io/secret/3a4da883-17ca-4a5b-9324-ca24aee64a30-kube-proxy-token-vng88") pod "kube-proxy-b2jpw" (UID: "3a4da883-17ca-4a5b-9324-ca24aee64a30")
Jun 28 20:31:30 minikube kubelet[3010]: I0628 20:31:30.495522    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/3a4da883-17ca-4a5b-9324-ca24aee64a30-xtables-lock") pod "kube-proxy-b2jpw" (UID: "3a4da883-17ca-4a5b-9324-ca24aee64a30")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.301860    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3a56ab17-aad6-45f3-813c-dfb6d75ddd69-config-volume") pod "coredns-5c98db65d4-grhc2" (UID: "3a56ab17-aad6-45f3-813c-dfb6d75ddd69")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.303274    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-shjs8" (UniqueName: "kubernetes.io/secret/f849e8a2-462c-47d8-9cd8-86a9d0f2c5f8-coredns-token-shjs8") pod "coredns-5c98db65d4-z6jl7" (UID: "f849e8a2-462c-47d8-9cd8-86a9d0f2c5f8")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.303648    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-shjs8" (UniqueName: "kubernetes.io/secret/3a56ab17-aad6-45f3-813c-dfb6d75ddd69-coredns-token-shjs8") pod "coredns-5c98db65d4-grhc2" (UID: "3a56ab17-aad6-45f3-813c-dfb6d75ddd69")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.303998    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f849e8a2-462c-47d8-9cd8-86a9d0f2c5f8-config-volume") pod "coredns-5c98db65d4-z6jl7" (UID: "f849e8a2-462c-47d8-9cd8-86a9d0f2c5f8")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.605835    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-jrqgk" (UniqueName: "kubernetes.io/secret/0ea547b2-82bd-465c-b1d8-b020c49159c4-storage-provisioner-token-jrqgk") pod "storage-provisioner" (UID: "0ea547b2-82bd-465c-b1d8-b020c49159c4")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.606061    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/0ea547b2-82bd-465c-b1d8-b020c49159c4-tmp") pod "storage-provisioner" (UID: "0ea547b2-82bd-465c-b1d8-b020c49159c4")
Jun 28 20:31:33 minikube kubelet[3010]: W0628 20:31:33.368502    3010 pod_container_deletor.go:75] Container "b73805d4a687d75d991610ad1c2552102d9f42f00e2e5529cfdd550a947c9d20" not found in pod's containers
Jun 28 20:31:33 minikube kubelet[3010]: W0628 20:31:33.554251    3010 pod_container_deletor.go:75] Container "14ff5eb56f6192f01f6f271da07ea970bf3a775f64966addcd8808a2912fb2ed" not found in pod's containers
Jun 28 20:36:29 minikube kubelet[3010]: I0628 20:36:29.780191    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-wrmc5" (UniqueName: "kubernetes.io/secret/4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153-default-token-wrmc5") pod "my-job-bad-6962x" (UID: "4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153")
Jun 28 20:36:29 minikube kubelet[3010]: I0628 20:36:29.780413    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "my-persistent-volume" (UniqueName: "kubernetes.io/host-path/4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153-my-persistent-volume") pod "my-job-bad-6962x" (UID: "4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153")
Jun 28 20:37:45 minikube kubelet[3010]: E0628 20:37:45.180086    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:37:45 minikube kubelet[3010]: E0628 20:37:45.180313    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:37:47 minikube kubelet[3010]: E0628 20:37:47.346431    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:37:47 minikube kubelet[3010]: E0628 20:37:47.346623    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:02 minikube kubelet[3010]: E0628 20:38:02.425642    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:02 minikube kubelet[3010]: E0628 20:38:02.426377    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:16 minikube kubelet[3010]: E0628 20:38:16.459424    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:16 minikube kubelet[3010]: E0628 20:38:16.459540    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:28 minikube kubelet[3010]: E0628 20:38:28.447362    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:28 minikube kubelet[3010]: E0628 20:38:28.450044    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:40 minikube kubelet[3010]: E0628 20:38:40.454969    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:40 minikube kubelet[3010]: E0628 20:38:40.455046    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:55 minikube kubelet[3010]: E0628 20:38:55.027408    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:55 minikube kubelet[3010]: E0628 20:38:55.027544    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:39:06 minikube kubelet[3010]: E0628 20:39:06.467351    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:39:06 minikube kubelet[3010]: E0628 20:39:06.467523    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:39:18 minikube kubelet[3010]: E0628 20:39:18.451915    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:39:18 minikube kubelet[3010]: E0628 20:39:18.452059    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:39:32 minikube kubelet[3010]: E0628 20:39:32.446451    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:39:32 minikube kubelet[3010]: E0628 20:39:32.446592    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:39:48 minikube kubelet[3010]: E0628 20:39:48.462440    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:39:48 minikube kubelet[3010]: E0628 20:39:48.462514    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:40:01 minikube kubelet[3010]: E0628 20:40:01.735487    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:40:01 minikube kubelet[3010]: E0628 20:40:01.735637    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:40:17 minikube kubelet[3010]: E0628 20:40:17.444403    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:40:17 minikube kubelet[3010]: E0628 20:40:17.444538    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:40:33 minikube kubelet[3010]: E0628 20:40:33.440607    3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:40:33 minikube kubelet[3010]: E0628 20:40:33.441939    3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:40:42 minikube kubelet[3010]: I0628 20:40:42.538640    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-wrmc5" (UniqueName: "kubernetes.io/secret/ebc79fb2-80c0-4ed1-9e3e-fe74bde725c4-default-token-wrmc5") pod "my-job-good-dlfxn" (UID: "ebc79fb2-80c0-4ed1-9e3e-fe74bde725c4")
Jun 28 20:40:42 minikube kubelet[3010]: I0628 20:40:42.538805    3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "my-persistent-volume" (UniqueName: "kubernetes.io/host-path/ebc79fb2-80c0-4ed1-9e3e-fe74bde725c4-my-persistent-volume") pod "my-job-good-dlfxn" (UID: "ebc79fb2-80c0-4ed1-9e3e-fe74bde725c4")

==> storage-provisioner <==

The operating system version:

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.2 LTS
Release:	18.04
Codename:	bionic
docktermj added a commit to senzing-garage/charts that referenced this issue Jun 28, 2019
@brianmacy

I am experiencing the same behavior. Any estimate on when the issue will be reviewed?

@tstromberg added the help wanted label Jul 16, 2019
@tstromberg
Contributor

I'm fairly ignorant about PVCs, but by what mechanism are you expecting /opt/my-path to be created?

@tstromberg added the area/mount and triage/needs-information labels Jul 16, 2019
@docktermj
Author

@tstromberg My understanding of Persistent Volume Claims (PVCs) is that Kubernetes will inform Docker of the volume, much like a docker run .... --volume /opt/my-path/:/opt/my-subpath ...
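To make that analogy concrete, a rough docker equivalent of the hostPath plus subPath combination would be something like this (hypothetical invocation; Kubernetes does not literally run this):

docker run --volume /opt/my-path/my-subpath-1:/opt/my-subpath docker.io/centos:latest sleep infinity

That is, subPath selects a subdirectory of the volume before it is mounted at mountPath, so the subdirectory has to exist (or be created) on the host side.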

Did I come close to understanding your question?

@docktermj
Author

docktermj commented Jul 17, 2019

My guess is that minikube doesn't completely implement the subPath behavior yet. The error:

Error: stat /opt/my-path/: no such file or directory

seems to indicate that when subPath is specified in the Job's volumeMounts, minikube somehow forgets to create the PV directory /opt/my-path.

Without the subPath specified, /opt/my-path seems to be located just fine.

@tstromberg added the priority/backlog label and removed the triage/needs-information label Jul 17, 2019
@tstromberg
Contributor

@docktermj - Is it possible that hostPath is only going to work if /opt/my-path exists within the minikube VM?

minikube ssh stat /opt/my-path probably needs to work before the path is used as a hostPath, but I could be entirely mistaken here. Do other paths like /tmp work?

If it helps: Docker inside of the guest VM does not speak with Docker on the host. There is minikube mount for setting up a 9p mount between the host and the guest VM, but it doesn't rely on Docker to do so.

My apologies for my lack of knowledge on this topic. Just trying to help anyway =)
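For anyone following along, a minimal version of that check, assuming minikube ssh forwards trailing arguments as a command to run in the VM:

minikube ssh -- stat /opt/my-path   # expected to fail until the directory exists in the VM
minikube ssh -- stat /tmp           # expected to succeed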

@tstromberg added the triage/needs-information label and removed the priority/backlog label Jul 17, 2019
@docktermj
Author

@tstromberg Other paths in /tmp work if subPath is not specified (i.e., the difference between my-job-bad.yaml and my-job-good.yaml above). That's what confuses me. If they both didn't work or both did work, I'd be less confused.

I'll look into the minikube mount suggestion. (Probably early next week.)

Appreciate you helping. You ask questions that make me think. ...and that may be the way this gets solved.
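For reference, the minikube mount command mentioned above takes the form minikube mount <host-path>:<vm-path>. With the path from this repro, a hypothetical (untested here) invocation would be:

minikube mount /opt/my-path:/opt/my-path   # host path : VM path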

@tstromberg
Contributor

Any luck with this?

@docktermj
Author

Nothing yet. Today I "upped" my minikube version to 1.3.1 and will try again early next week.

@docktermj
Author

Same issue for 1.3.1:

$ minikube start --cpus 4 --memory 8192 --vm-driver kvm2
😄  minikube v1.3.1 on Ubuntu 18.04
⚠️  Error checking driver version: exit status 1
🔥  Creating kvm2 VM (CPUs=4, Memory=8192MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.15.2 on Docker 18.09.8 ...
🚜  Pulling images ...
🚀  Launching Kubernetes ... 
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

So still an open issue.

@docktermj
Author

As a work-around, I can do this:

minikube ssh

From the minikube prompt:

sudo mkdir /opt/my-path

Then the subPath key-values work.

I consider this a work-around because it's procedural, not declarative.
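The same workaround can be done in one step from the host, again assuming minikube ssh forwards trailing arguments as a command:

minikube ssh -- sudo mkdir -p /opt/my-path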

@docktermj
Author

docktermj commented Aug 20, 2019

Well, here's another bad work-around. Add an initContainer YAML stanza to the my-job-bad.yaml file (described in the initial description) like this:

cat <<EOT > my-job-bad.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-bad
  namespace: my-namespace
spec:
  template:
    spec:
      initContainers:
        - name: pre-mount
          image: busybox:1.28
          volumeMounts:
            - name:  my-volume
              mountPath: /opt/my-subpath
      containers:
        - name: subpath-test
          image: docker.io/centos:latest
          imagePullPolicy: Always
          command: ["sleep"]
          args: ["infinity"]
          volumeMounts:
            - name: my-volume
              mountPath: /opt/my-subpath
              subPath: my-subpath-1    
      restartPolicy: Never
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-persistent-volume-claim
EOT
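This presumably works because the initContainer mounts the volume without subPath, which causes the missing hostPath directory to be created when the plain bind mount is set up, whereas the subPath code path stats the directory first and fails if it is absent. A quick way to confirm the workaround, reusing the repro above:

kubectl delete -f my-job-bad.yaml
kubectl create -f my-job-bad.yaml
kubectl get pods -n my-namespace --watch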

@docktermj
Author

@tstromberg Given the two "bad" work-arounds above, how do we request a fix in the minikube code?

@tstromberg added the priority/backlog label and removed the triage/needs-information label Aug 22, 2019
@tstromberg
Contributor

@docktermj - consider this issue the request.

I'm still a little unclear on where this should be fixed. The only PVC code in minikube is this package:

https://github.com/kubernetes/minikube/blob/master/pkg/storage/storage_provisioner.go

It's possible that simply rebuilding the storage-provisioner image, now that we've moved off of the r2d4 storage-provisioner fork (#3628), might have an effect here, but it hasn't been tested. Anyway, help wanted!
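For anyone testing that theory, toggling the addon is one way to redeploy the provisioner pod (the addon name matches minikube's addon list; note this only picks up a rebuilt image if the image tag changed):

minikube addons disable storage-provisioner
minikube addons enable storage-provisioner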

@tstromberg added the kind/bug label Sep 20, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 19, 2019
@docktermj
Author

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Dec 20, 2019
@avoidik
Copy link
Contributor

avoidik commented Feb 9, 2020

Wasn't this fixed already in #2256?

@sharifelgamal
Copy link
Collaborator

@docktermj Is this still an issue? Have you tried with any newer version of minikube?

@docktermj
Copy link
Author

@sharifelgamal

Since I put in my work-around, I haven't revisited the issue. I can certainly try a new version of minikube.

@isra17
Copy link

isra17 commented Apr 26, 2020

I do have a similar issue with Minikube on Arch:

$ minikube version
minikube version: v1.9.2
commit: 93af9c1e43cab9618e301bc9fa720c63d5efa393

It seems to work the first time I create the deployment, but when I stop/start minikube again, the pod won't restart; it fails with the error Warning Failed 10m (x11 over 12m) kubelet, minikube Error: stat /tmp/hostpath-provisioner/pvc-d8a5a418-4dc5-4b87-9dd1-1340b597f215: no such file or directory. The volume mount for the PVC also uses a subPath.

If I delete the PVC and create it again, the pod is able to start successfully.
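A quick way to confirm what's happening (assuming minikube ssh accepts a trailing command) is to list the provisioner's directory before and after the restart:

minikube ssh -p test -- ls -la /tmp/hostpath-provisioner

If the pvc-* directory disappears across minikube stop/minikube start while the PV and PVC objects survive, the stat error above is exactly what you'd expect.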

@medyagh
Copy link
Member

medyagh commented May 13, 2020

@isra17 do you mind sharing your full workflow example with yaml files so I could replicate this issue?

@isra17
Copy link

isra17 commented May 13, 2020

Fully reproducible steps:

$ minikube start -p test
😄  [test] minikube v1.9.2 on Arch rolling
✨  Automatically selected the docker driver
👍  Starting control plane node m01 in cluster test
🚜  Pulling base image ...
🔥  Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=8096MB
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "test"

Apply a StatefulSet with a provisioned volume using subPath:

kubectl apply -f https://gist.githubusercontent.com/isra17/ead482c2c5230e78d3d80be44877bfe0/raw/bdc9f4b0fc9baf043393152cfc0f47e7547cdbb7/objects.yaml

At that point the pod should be running.

Restart minikube:

$ minikube stop -p test
$ minikube start -p test

The pod won't restart:

$ kubectl describe pod test-statefulset-0
Name:         test-statefulset-0
Namespace:    default
Priority:     0
Node:         test/172.17.0.3
Start Time:   Wed, 13 May 2020 09:40:06 -0400
Labels:       controller-revision-hash=test-statefulset-66db94bc56
              name=test
              statefulset.kubernetes.io/pod-name=test-statefulset-0
Annotations:  <none>
Status:       Running
IP:           172.18.0.2
IPs:
  IP:           172.18.0.2
Controlled By:  StatefulSet/test-statefulset
Containers:
  ubuntu:
    Container ID:   docker://f378ed9cc4cd7c81cae762c59eb46ac0532820a1a95a4fee6b1fc53d713edc83
    Image:          k8s.gcr.io/echoserver:1.4
    Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CreateContainerConfigError
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 13 May 2020 09:40:19 -0400
      Finished:     Wed, 13 May 2020 09:40:54 -0400
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q59vp (ro)
      /volume from test-volume-claim (rw,path="subpath")
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  test-volume-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-volume-claim-test-statefulset-0
    ReadOnly:   false
  default-token-q59vp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-q59vp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  90s (x2 over 90s)  default-scheduler  running "VolumeBinding" filter plugin for pod "test-statefulset-0": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         88s                default-scheduler  Successfully assigned default/test-statefulset-0 to test
  Normal   Pulling           87s                kubelet, test      Pulling image "k8s.gcr.io/echoserver:1.4"
  Normal   Pulled            75s                kubelet, test      Successfully pulled image "k8s.gcr.io/echoserver:1.4"
  Normal   Created           75s                kubelet, test      Created container ubuntu
  Normal   Started           75s                kubelet, test      Started container ubuntu
  Normal   SandboxChanged    12s                kubelet, test      Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            11s (x2 over 12s)  kubelet, test      Container image "k8s.gcr.io/echoserver:1.4" already present on machine
  Warning  Failed            11s (x2 over 12s)  kubelet, test      Error: stat /tmp/hostpath-provisioner/pvc-823397a6-ff98-4bbb-9c49-2f5b45195933: no such file or directory

$ minikube logs -p test
==> Docker <==
-- Logs begin at Wed 2020-05-13 13:47:37 UTC, end at Wed 2020-05-13 13:48:22 UTC. --
May 13 13:47:53 test dockerd[394]: time="2020-05-13T13:47:53.474499375Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7a0b7141e56004f11930bd8e9fca2ff054c2be30c2cb63261175d4d7aa9d9dd1.sock debug=false pid=1616
May 13 13:47:53 test dockerd[394]: time="2020-05-13T13:47:53.567716115Z" level=info msg="shim containerd-shim started" address=/containerd-shim/419e64139a148256d8146ac6947e9d3ce45eeaea7de020c72478ad964da1d9a9.sock debug=false pid=1633
May 13 13:47:53 test dockerd[394]: time="2020-05-13T13:47:53.575254050Z" level=info msg="shim containerd-shim started" address=/containerd-shim/d5b6c54a45fb1db4ebf6d0f1ec9f4c8a2431f2482c4ea8901da580f9d640a4c6.sock debug=false pid=1656
May 13 13:47:53 test dockerd[394]: time="2020-05-13T13:47:53.646619953Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f2c6e2a440aadb6095b8e480c3f0ad44964395bd780faac5265b360de50cb04a.sock debug=false pid=1707
May 13 13:47:53 test sudo[1724]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/pgrep -xnf kube-apiserver.*minikube.*
May 13 13:47:53 test sudo[1724]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:47:53 test sudo[1724]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:47:53 test sudo[1724]: pam_unix(sudo:session): session closed for user root
May 13 13:47:53 test dockerd[394]: time="2020-05-13T13:47:53.714410622Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7bf0eede9d4d237df7365bcd5d320423d371f56f9788359e1e2c34589171c34e.sock debug=false pid=1753
May 13 13:47:54 test sudo[1807]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/pgrep -xnf kube-apiserver.*minikube.*
May 13 13:47:54 test sudo[1807]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:47:54 test sudo[1807]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:47:54 test sudo[1807]: pam_unix(sudo:session): session closed for user root
May 13 13:47:54 test dockerd[394]: time="2020-05-13T13:47:54.276274518Z" level=info msg="shim containerd-shim started" address=/containerd-shim/51c89c70741faf2210a4c0070628b5a1d1931d02fc239743297982cd9fdfd4e3.sock debug=false pid=1813
May 13 13:47:54 test dockerd[394]: time="2020-05-13T13:47:54.311851506Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e519201ea92818471a35cf20ac284810e66095f9921fb4b25a2e87b7bbb57f55.sock debug=false pid=1837
May 13 13:47:54 test dockerd[394]: time="2020-05-13T13:47:54.322022008Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bd5ca1a25fa0262b0888b37e49b0776bbb197dbd4f8d87f57f0e5fbe1284d6b4.sock debug=false pid=1856
May 13 13:47:54 test dockerd[394]: time="2020-05-13T13:47:54.339234340Z" level=info msg="shim containerd-shim started" address=/containerd-shim/63223dbf25e4214371a4e59e81f9ef9e811692abb56f8d580ce3eddb4c7eb8e0.sock debug=false pid=1897
May 13 13:47:54 test dockerd[394]: time="2020-05-13T13:47:54.340232935Z" level=info msg="shim containerd-shim started" address=/containerd-shim/8756721925bde235d79a7057d018811228aa825c3626b6af34af8cb70b934e74.sock debug=false pid=1901
May 13 13:47:54 test dockerd[394]: time="2020-05-13T13:47:54.380865194Z" level=info msg="shim reaped" id=b05bdb2b5bfd5270274e5dbcdd7e36d1f1a5fef90c8606728ef7902b9b55af9c
May 13 13:47:54 test dockerd[394]: time="2020-05-13T13:47:54.391949615Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 13 13:47:54 test dockerd[394]: time="2020-05-13T13:47:54.392052191Z" level=warning msg="b05bdb2b5bfd5270274e5dbcdd7e36d1f1a5fef90c8606728ef7902b9b55af9c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b05bdb2b5bfd5270274e5dbcdd7e36d1f1a5fef90c8606728ef7902b9b55af9c/mounts/shm, flags: 0x2: no such file or directory"
May 13 13:47:54 test sudo[2055]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/pgrep -xnf kube-apiserver.*minikube.*
May 13 13:47:54 test sudo[2055]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:47:54 test sudo[2055]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:47:54 test sudo[2055]: pam_unix(sudo:session): session closed for user root
May 13 13:47:59 test sudo[2314]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/env PATH=/var/lib/minikube/binaries/v1.18.0:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml
May 13 13:47:59 test sudo[2314]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:47:59 test sudo[2314]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:47:59 test sudo[2314]: pam_unix(sudo:session): session closed for user root
May 13 13:48:00 test sudo[2336]:     root : TTY=unknown ; PWD=/ ; USER=root ; ENV=KUBECONFIG=/var/lib/minikube/kubeconfig ; COMMAND=/var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
May 13 13:48:00 test sudo[2336]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:48:00 test sudo[2336]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:48:00 test sudo[2336]: pam_unix(sudo:session): session closed for user root
May 13 13:48:00 test sudo[2352]:     root : TTY=unknown ; PWD=/ ; USER=root ; ENV=KUBECONFIG=/var/lib/minikube/kubeconfig ; COMMAND=/var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
May 13 13:48:00 test sudo[2352]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:48:00 test sudo[2352]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:48:01 test sudo[2352]: pam_unix(sudo:session): session closed for user root
May 13 13:48:09 test sudo[2555]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/journalctl -u docker -n 60
May 13 13:48:09 test sudo[2555]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:48:09 test sudo[2555]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:48:09 test sudo[2555]: pam_unix(sudo:session): session closed for user root
May 13 13:48:10 test sudo[2568]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/local/bin/crictl ps -a
May 13 13:48:10 test sudo[2568]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:48:10 test sudo[2568]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:48:10 test sudo[2568]: pam_unix(sudo:session): session closed for user root
May 13 13:48:10 test sudo[2603]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
May 13 13:48:10 test sudo[2603]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:48:10 test sudo[2603]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:48:10 test sudo[2603]: pam_unix(sudo:session): session closed for user root
May 13 13:48:10 test sudo[2624]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/dmesg -PH -L=never --level warn,err,crit,alert,emerg
May 13 13:48:10 test sudo[2624]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:48:10 test sudo[2624]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:48:10 test sudo[2624]: pam_unix(sudo:session): session closed for user root
May 13 13:48:12 test sudo[2758]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/journalctl -u kubelet -n 60
May 13 13:48:12 test sudo[2758]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:48:12 test sudo[2758]: pam_unix(sudo:session): session opened for user root by (uid=0)
May 13 13:48:12 test sudo[2758]: pam_unix(sudo:session): session closed for user root
May 13 13:48:22 test sudo[2920]:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/journalctl -u docker -n 60
May 13 13:48:22 test sudo[2920]: pam_env(sudo:session): Unable to open env file: /etc/default/locale: No such file or directory
May 13 13:48:22 test sudo[2920]: pam_unix(sudo:session): session opened for user root by (uid=0)

==> container status <==
CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
c3929176040d4       303ce5db0e90d                                                                                   30 seconds ago      Running             etcd                      0                   b1062313cff81
45171cca54014       a31f78c7c8ce1                                                                                   30 seconds ago      Running             kube-scheduler            2                   34558faa69841
b05bdb2b5bfd5       303ce5db0e90d                                                                                   30 seconds ago      Exited              etcd                      2                   c62db40c4ec22
49ededb937d84       74060cea7f704                                                                                   30 seconds ago      Running             kube-apiserver            0                   adc61e4f515f3
1842b516ecb84       d3e55153f52fb                                                                                   30 seconds ago      Running             kube-controller-manager   2                   e5309da29c254
bd4abaeb5b8e7       67da37a9a360e                                                                                   6 minutes ago       Exited              coredns                   2                   9b0479671e904
37c07b76e2149       67da37a9a360e                                                                                   6 minutes ago       Exited              coredns                   2                   19acb6816820b
171780a64b204       4689081edb103                                                                                   6 minutes ago       Exited              storage-provisioner       2                   83371d1c3ceb8
a92cd62a6027f       aa67fec7d7ef7                                                                                   6 minutes ago       Exited              kindnet-cni               2                   a8dca6e796bca
503fbb40f73e1       43940c34f24f3                                                                                   7 minutes ago       Exited              kube-proxy                1                   dd39e740b8fe0
d921b5840eedc       d3e55153f52fb                                                                                   7 minutes ago       Exited              kube-controller-manager   1                   17f0c4976af7e
6b96b887f044e       74060cea7f704                                                                                   7 minutes ago       Exited              kube-apiserver            1                   c4372c266f4c8
cf9a5808a6ee5       a31f78c7c8ce1                                                                                   7 minutes ago       Exited              kube-scheduler            1                   8c8c2ef0a9039
f378ed9cc4cd7       k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb   8 minutes ago       Exited              ubuntu                    0                   79d0aa2f2bb87

==> coredns [37c07b76e214] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
E0513 13:47:22.073394       1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1303&timeout=5m19s&timeoutSeconds=319&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E0513 13:47:22.073604       1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=2233&timeout=8m26s&timeoutSeconds=506&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E0513 13:47:22.073918       1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1303&timeout=7m7s&timeoutSeconds=427&watch=true: dial tcp 10.96.0.1:443: connect: connection refused

==> coredns [bd4abaeb5b8e] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

==> describe nodes <==
Name:               test
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=test
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393
                    minikube.k8s.io/name=test
                    minikube.k8s.io/updated_at=2020_05_13T09_34_30_0700
                    minikube.k8s.io/version=v1.9.2
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 13 May 2020 13:34:26 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  test
  AcquireTime:     <unset>
  RenewTime:       Wed, 13 May 2020 13:47:20 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 13 May 2020 13:46:22 +0000   Wed, 13 May 2020 13:34:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 13 May 2020 13:46:22 +0000   Wed, 13 May 2020 13:34:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 13 May 2020 13:46:22 +0000   Wed, 13 May 2020 13:34:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 13 May 2020 13:46:22 +0000   Wed, 13 May 2020 13:34:46 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.3
  Hostname:    test
Capacity:
  cpu:                4
  ephemeral-storage:  240232960Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32777108Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  240232960Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32777108Ki
  pods:               110
System Info:
  Machine ID:                 ed47d8fb8ba44d27912a804bbc124f21
  System UUID:                05c6f637-99c4-4e9a-9e40-5d1fe8b77c5e
  Boot ID:                    9975863c-53d7-4723-bc52-4ca165d015b8
  Kernel Version:             5.6.11-arch1-1
  OS Image:                   Ubuntu 19.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.2
  Kubelet Version:            v1.18.0
  Kube-Proxy Version:         v1.18.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (10 in total)
  Namespace                   Name                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                            ------------  ----------  ---------------  -------------  ---
  default                     test-statefulset-0              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
  kube-system                 coredns-66bff467f8-dmbqx        100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
  kube-system                 coredns-66bff467f8-s5chg        100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
  kube-system                 etcd-test                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
  kube-system                 kindnet-5x6vl                   100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      13m
  kube-system                 kube-apiserver-test             250m (6%)     0 (0%)      0 (0%)           0 (0%)         13m
  kube-system                 kube-controller-manager-test    200m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
  kube-system                 kube-proxy-hwrqb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
  kube-system                 kube-scheduler-test             100m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
  kube-system                 storage-provisioner             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (21%)  100m (2%)
  memory             190Mi (0%)  390Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                  From              Message
  ----    ------                   ----                 ----              -------
  Normal  NodeHasSufficientMemory  14m (x4 over 14m)    kubelet, test     Node test status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    14m (x5 over 14m)    kubelet, test     Node test status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     14m (x4 over 14m)    kubelet, test     Node test status is now: NodeHasSufficientPID
  Normal  Starting                 13m                  kubelet, test     Starting kubelet.
  Normal  NodeHasSufficientMemory  13m                  kubelet, test     Node test status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    13m                  kubelet, test     Node test status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     13m                  kubelet, test     Node test status is now: NodeHasSufficientPID
  Normal  NodeNotReady             13m                  kubelet, test     Node test status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  13m                  kubelet, test     Updated Node Allocatable limit across pods
  Normal  NodeReady                13m                  kubelet, test     Node test status is now: NodeReady
  Normal  Starting                 13m                  kube-proxy, test  Starting kube-proxy.
  Normal  Starting                 7m7s                 kubelet, test     Starting kubelet.
  Normal  NodeHasSufficientMemory  7m7s (x8 over 7m7s)  kubelet, test     Node test status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    7m7s (x8 over 7m7s)  kubelet, test     Node test status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     7m7s (x7 over 7m7s)  kubelet, test     Node test status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  7m7s                 kubelet, test     Updated Node Allocatable limit across pods
  Normal  Starting                 7m1s                 kube-proxy, test  Starting kube-proxy.

==> dmesg <==
[  +0.000004]  activate_task+0x20b/0x3c0
[  +0.000003]  ? sched_clock+0x5/0x10
[  +0.000001]  ttwu_do_activate+0x45/0x60
[  +0.000001]  try_to_wake_up+0x24a/0x750
[  +0.000003]  signal_wake_up_state+0x15/0x30
[  +0.000001]  __send_signal+0x1e4/0x410
[  +0.000002]  do_notify_parent+0x27d/0x2c0
[  +0.000002]  release_task+0x3e7/0x450
[  +0.000001]  do_exit+0x709/0xb40
[  +0.000003]  ? __switch_to_asm+0x40/0x70
[  +0.000001]  ? __switch_to_asm+0x40/0x70
[  +0.000001]  ? __switch_to_asm+0x34/0x70
[  +0.000002]  do_group_exit+0x3a/0xa0
[  +0.000001]  get_signal+0x132/0x8c0
[  +0.000003]  do_signal+0x43/0x680
[  +0.000001]  ? do_nanosleep+0xba/0x170
[  +0.000003]  exit_to_usermode_loop+0x7f/0x100
[  +0.000002]  do_syscall_64+0x11f/0x150
[  +0.000001]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  +0.000002] RIP: 0033:0x45ad2d
[  +0.000003] Code: Bad RIP value.
[  +0.000000] RSP: 002b:00007f2dfff3ac68 EFLAGS: 00000202 ORIG_RAX: 0000000000000023
[  +0.000001] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000045ad2d
[  +0.000000] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f2dfff3ac68
[  +0.000001] RBP: 00007f2dfff3ac78 R08: 0000000000000000 R09: 0000000000000000
[  +0.000000] R10: 0000000000000008 R11: 0000000000000202 R12: 00007f2dfff3ae80
[  +0.000001] R13: 0000000000000000 R14: 0000000002085b60 R15: 00007fff4d377f20
[  +0.000001] ---[ end trace 6b6168395cb31f9c ]---
[  +0.785500] kauditd_printk_skb: 80 callbacks suppressed
[  +6.887254] kauditd_printk_skb: 26 callbacks suppressed
[  +8.719362] kauditd_printk_skb: 15 callbacks suppressed
[May13 13:31] kauditd_printk_skb: 18 callbacks suppressed
[May13 13:32] kauditd_printk_skb: 14 callbacks suppressed
[May13 13:34] device-mapper: thin: Deletion of thin device 4569 failed.
[  +0.024301] device-mapper: ioctl: remove_all left 3 open device(s)
[  +0.178364] kauditd_printk_skb: 49 callbacks suppressed
[  +5.027559] kauditd_printk_skb: 322 callbacks suppressed
[  +9.950978] kauditd_printk_skb: 43 callbacks suppressed
[  +8.194707] kauditd_printk_skb: 5 callbacks suppressed
[  +8.440263] kauditd_printk_skb: 34 callbacks suppressed
[ +11.524546] kauditd_printk_skb: 12 callbacks suppressed
[May13 13:40] kauditd_printk_skb: 14 callbacks suppressed
[ +10.740999] kauditd_printk_skb: 6 callbacks suppressed
[May13 13:41] kauditd_printk_skb: 12 callbacks suppressed
[  +5.053340] kauditd_printk_skb: 153 callbacks suppressed
[  +5.061953] kauditd_printk_skb: 174 callbacks suppressed
[  +9.468556] kauditd_printk_skb: 71 callbacks suppressed
[ +13.716949] kauditd_printk_skb: 27 callbacks suppressed
[May13 13:46] kauditd_printk_skb: 20 callbacks suppressed
[ +24.112314] kauditd_printk_skb: 20 callbacks suppressed
[May13 13:47] kauditd_printk_skb: 5 callbacks suppressed
[ +10.240911] kauditd_printk_skb: 4 callbacks suppressed
[ +14.839035] kauditd_printk_skb: 5 callbacks suppressed
[  +8.987803] kauditd_printk_skb: 12 callbacks suppressed
[  +5.879428] kauditd_printk_skb: 149 callbacks suppressed
[  +5.426731] kauditd_printk_skb: 188 callbacks suppressed
[  +5.498690] kauditd_printk_skb: 73 callbacks suppressed
[  +6.333289] kauditd_printk_skb: 14 callbacks suppressed
[May13 13:48] kauditd_printk_skb: 8 callbacks suppressed
[ +12.907649] kauditd_printk_skb: 20 callbacks suppressed

==> etcd [b05bdb2b5bfd] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-05-13 13:47:54.335440 I | etcdmain: etcd Version: 3.4.3
2020-05-13 13:47:54.335464 I | etcdmain: Git SHA: 3cf2f69b5
2020-05-13 13:47:54.335467 I | etcdmain: Go Version: go1.12.12
2020-05-13 13:47:54.335472 I | etcdmain: Go OS/Arch: linux/amd64
2020-05-13 13:47:54.335474 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2020-05-13 13:47:54.335511 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-05-13 13:47:54.335539 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-05-13 13:47:54.335659 C | etcdmain: listen tcp 172.17.0.3:2380: bind: cannot assign requested address

==> etcd [c3929176040d] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-05-13 13:47:54.389664 I | etcdmain: etcd Version: 3.4.3
2020-05-13 13:47:54.389698 I | etcdmain: Git SHA: 3cf2f69b5
2020-05-13 13:47:54.389701 I | etcdmain: Go Version: go1.12.12
2020-05-13 13:47:54.389703 I | etcdmain: Go OS/Arch: linux/amd64
2020-05-13 13:47:54.389706 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2020-05-13 13:47:54.389741 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-05-13 13:47:54.389761 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-05-13 13:47:54.390195 I | embed: name = test
2020-05-13 13:47:54.390204 I | embed: data dir = /var/lib/minikube/etcd
2020-05-13 13:47:54.390207 I | embed: member dir = /var/lib/minikube/etcd/member
2020-05-13 13:47:54.390209 I | embed: heartbeat = 100ms
2020-05-13 13:47:54.390211 I | embed: election = 1000ms
2020-05-13 13:47:54.390214 I | embed: snapshot count = 10000
2020-05-13 13:47:54.390223 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-05-13 13:47:54.390226 I | embed: initial advertise peer URLs = https://172.17.0.2:2380
2020-05-13 13:47:54.390229 I | embed: initial cluster =
2020-05-13 13:47:54.402897 I | etcdserver: restarting member b273bc7741bcb020 in cluster 86482fea2286a1d2 at commit index 2482
raft2020/05/13 13:47:54 INFO: b273bc7741bcb020 switched to configuration voters=()
raft2020/05/13 13:47:54 INFO: b273bc7741bcb020 became follower at term 3
raft2020/05/13 13:47:54 INFO: newRaft b273bc7741bcb020 [peers: [], term: 3, commit: 2482, applied: 0, lastindex: 2482, lastterm: 3]
2020-05-13 13:47:54.544509 W | auth: simple token is not cryptographically signed
2020-05-13 13:47:54.609399 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-05-13 13:47:54.610944 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-05-13 13:47:54.611037 I | embed: listening for metrics on http://127.0.0.1:2381
2020-05-13 13:47:54.611325 I | embed: listening for peers on 172.17.0.2:2380
raft2020/05/13 13:47:54 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-05-13 13:47:54.611537 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
2020-05-13 13:47:54.611588 N | etcdserver/membership: set the initial cluster version to 3.4
2020-05-13 13:47:54.611614 I | etcdserver/api: enabled capabilities for version 3.4
2020-05-13 13:47:54.793155 W | etcdserver: request "ID:12691269155536321281 Method:\"PUT\" Path:\"/0/members/b273bc7741bcb020/attributes\" Val:\"{\\\"name\\\":\\\"test\\\",\\\"clientURLs\\\":[\\\"https://172.17.0.3:2379\\\"]}\" " with result "" took too long (139.470432ms) to execute
raft2020/05/13 13:47:55 INFO: b273bc7741bcb020 is starting a new election at term 3
raft2020/05/13 13:47:55 INFO: b273bc7741bcb020 became candidate at term 4
raft2020/05/13 13:47:55 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 4
raft2020/05/13 13:47:55 INFO: b273bc7741bcb020 became leader at term 4
raft2020/05/13 13:47:55 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 4
2020-05-13 13:47:55.738378 I | etcdserver: published {Name:test ClientURLs:[https://172.17.0.2:2379]} to cluster 86482fea2286a1d2
2020-05-13 13:47:55.738391 I | embed: ready to serve client requests
2020-05-13 13:47:55.738497 I | embed: ready to serve client requests
2020-05-13 13:47:55.739196 I | embed: serving client requests on 127.0.0.1:2379
2020-05-13 13:47:55.739254 I | embed: serving client requests on 172.17.0.2:2379

==> kernel <==
 13:48:23 up  9:40,  0 users,  load average: 1.52, 1.42, 1.46
Linux test 5.6.11-arch1-1 #1 SMP PREEMPT Wed, 06 May 2020 17:32:37 +0000 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [49ededb937d8] <==
I0513 13:47:56.320235       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0513 13:47:56.326550       1 client.go:361] parsed scheme: "endpoint"
I0513 13:47:56.326652       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
W0513 13:47:56.423671       1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
W0513 13:47:56.430316       1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0513 13:47:56.437938       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0513 13:47:56.450242       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0513 13:47:56.452535       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0513 13:47:56.464252       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0513 13:47:56.479695       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0513 13:47:56.479724       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0513 13:47:56.487430       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0513 13:47:56.487454       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0513 13:47:56.488779       1 client.go:361] parsed scheme: "endpoint"
I0513 13:47:56.488798       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0513 13:47:56.494742       1 client.go:361] parsed scheme: "endpoint"
I0513 13:47:56.494762       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0513 13:47:57.895089       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0513 13:47:57.895099       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0513 13:47:57.895340       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0513 13:47:57.895508       1 secure_serving.go:178] Serving securely on [::]:8443
I0513 13:47:57.895547       1 crd_finalizer.go:266] Starting CRDFinalizer
I0513 13:47:57.895562       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0513 13:47:57.895568       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0513 13:47:57.895607       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0513 13:47:57.898554       1 available_controller.go:387] Starting AvailableConditionController
I0513 13:47:57.898569       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0513 13:47:57.895632       1 controller.go:81] Starting OpenAPI AggregationController
I0513 13:47:57.895649       1 establishing_controller.go:76] Starting EstablishingController
I0513 13:47:57.895690       1 controller.go:86] Starting OpenAPI controller
I0513 13:47:57.895697       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0513 13:47:57.895702       1 naming_controller.go:291] Starting NamingConditionController
I0513 13:47:57.896060       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0513 13:47:57.896066       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0513 13:47:57.896218       1 autoregister_controller.go:141] Starting autoregister controller
I0513 13:47:57.898647       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0513 13:47:57.896233       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0513 13:47:57.898655       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0513 13:47:57.896552       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0513 13:47:57.898663       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0513 13:47:57.897128       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0513 13:47:57.897134       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E0513 13:47:57.899304       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg:
I0513 13:47:57.995783       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0513 13:47:57.998764       1 shared_informer.go:230] Caches are synced for crd-autoregister
I0513 13:47:57.998847       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0513 13:47:57.998855       1 cache.go:39] Caches are synced for autoregister controller
I0513 13:47:57.998869       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0513 13:47:58.894991       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0513 13:47:58.895123       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0513 13:47:58.897423       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
W0513 13:47:59.046566       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0513 13:47:59.047726       1 controller.go:606] quota admission added evaluator for: endpoints
I0513 13:47:59.072855       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0513 13:47:59.596595       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0513 13:47:59.612529       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0513 13:47:59.675032       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0513 13:47:59.740328       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0513 13:47:59.748243       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0513 13:48:14.762237       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io

==> kube-apiserver [6b96b887f044] <==
I0513 13:41:20.730112       1 crd_finalizer.go:266] Starting CRDFinalizer
I0513 13:41:20.730124       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0513 13:41:20.730131       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0513 13:41:20.730139       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0513 13:41:20.730148       1 naming_controller.go:291] Starting NamingConditionController
I0513 13:41:20.730163       1 establishing_controller.go:76] Starting EstablishingController
I0513 13:41:20.730172       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0513 13:41:20.730179       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0513 13:41:20.730291       1 controller.go:86] Starting OpenAPI controller
I0513 13:41:20.730966       1 available_controller.go:387] Starting AvailableConditionController
I0513 13:41:20.730975       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0513 13:41:20.730987       1 autoregister_controller.go:141] Starting autoregister controller
I0513 13:41:20.730990       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0513 13:41:20.732800       1 controller.go:81] Starting OpenAPI AggregationController
I0513 13:41:20.733463       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0513 13:41:20.733526       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0513 13:41:20.732853       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0513 13:41:20.738199       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0513 13:41:20.738283       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0513 13:41:20.738354       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0513 13:41:20.835418       1 cache.go:39] Caches are synced for autoregister controller
I0513 13:41:20.835887       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0513 13:41:20.836488       1 cache.go:39] Caches are synced for AvailableConditionController controller
E0513 13:41:20.835505       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0513 13:41:20.835460       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0513 13:41:20.843269       1 shared_informer.go:230] Caches are synced for crd-autoregister
I0513 13:41:20.854319       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0513 13:41:21.729621       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0513 13:41:21.729757       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0513 13:41:21.735155       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0513 13:41:22.426926       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0513 13:41:22.439437       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0513 13:41:22.466389       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0513 13:41:22.489316       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0513 13:41:22.495639       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0513 13:41:38.218094       1 controller.go:606] quota admission added evaluator for: endpoints
I0513 13:41:51.150862       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0513 13:43:39.856163       1 controller.go:606] quota admission added evaluator for: statefulsets.apps
I0513 13:47:22.071880       1 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0513 13:47:22.072176       1 controller.go:123] Shutting down OpenAPI controller
I0513 13:47:22.072187       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I0513 13:47:22.072200       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0513 13:47:22.072207       1 available_controller.go:399] Shutting down AvailableConditionController
I0513 13:47:22.072215       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0513 13:47:22.072222       1 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
I0513 13:47:22.072230       1 establishing_controller.go:87] Shutting down EstablishingController
I0513 13:47:22.072236       1 naming_controller.go:302] Shutting down NamingConditionController
I0513 13:47:22.072242       1 customresource_discovery_controller.go:220] Shutting down DiscoveryController
I0513 13:47:22.072248       1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
I0513 13:47:22.072275       1 crd_finalizer.go:278] Shutting down CRDFinalizer
I0513 13:47:22.072283       1 autoregister_controller.go:165] Shutting down autoregister controller
I0513 13:47:22.072528       1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0513 13:47:22.072535       1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0513 13:47:22.072574       1 controller.go:87] Shutting down OpenAPI AggregationController
I0513 13:47:22.072599       1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0513 13:47:22.072793       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController
I0513 13:47:22.072805       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0513 13:47:22.072818       1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0513 13:47:22.073816       1 secure_serving.go:222] Stopped listening on [::]:8443
E0513 13:47:22.075701       1 controller.go:184] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-controller-manager [1842b516ecb8] <==
I0513 13:48:16.746159       1 shared_informer.go:223] Waiting for caches to sync for tokens
I0513 13:48:16.846438       1 shared_informer.go:230] Caches are synced for tokens
I0513 13:48:17.005167       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0513 13:48:17.005206       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0513 13:48:17.005228       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0513 13:48:17.005254       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0513 13:48:17.005340       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0513 13:48:17.005464       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0513 13:48:17.005493       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0513 13:48:17.005520       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0513 13:48:17.005830       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0513 13:48:17.006062       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0513 13:48:17.006267       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0513 13:48:17.006400       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0513 13:48:17.006546       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0513 13:48:17.006721       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0513 13:48:17.006817       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0513 13:48:17.006912       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0513 13:48:17.007005       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0513 13:48:17.007104       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0513 13:48:17.007193       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0513 13:48:17.007329       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0513 13:48:17.007503       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0513 13:48:17.007586       1 controllermanager.go:533] Started "resourcequota"
I0513 13:48:17.007899       1 resource_quota_controller.go:272] Starting resource quota controller
I0513 13:48:17.007990       1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0513 13:48:17.008085       1 resource_quota_monitor.go:303] QuotaMonitor running
I0513 13:48:17.025561       1 controllermanager.go:533] Started "horizontalpodautoscaling"
I0513 13:48:17.025639       1 horizontal.go:169] Starting HPA controller
I0513 13:48:17.025646       1 shared_informer.go:223] Waiting for caches to sync for HPA
I0513 13:48:17.028892       1 controllermanager.go:533] Started "cronjob"
W0513 13:48:17.028906       1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
I0513 13:48:17.028969       1 cronjob_controller.go:97] Starting CronJob Manager
E0513 13:48:17.032924       1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0513 13:48:17.032943       1 controllermanager.go:525] Skipping "service"
W0513 13:48:17.032950       1 core.go:243] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
W0513 13:48:17.032954       1 controllermanager.go:525] Skipping "route"
I0513 13:48:17.036873       1 controllermanager.go:533] Started "pvc-protection"
I0513 13:48:17.036963       1 pvc_protection_controller.go:101] Starting PVC protection controller
I0513 13:48:17.036972       1 shared_informer.go:223] Waiting for caches to sync for PVC protection
I0513 13:48:17.041054       1 controllermanager.go:533] Started "podgc"
I0513 13:48:17.041304       1 gc_controller.go:89] Starting GC controller
I0513 13:48:17.041386       1 shared_informer.go:223] Waiting for caches to sync for GC
I0513 13:48:17.046326       1 controllermanager.go:533] Started "serviceaccount"
I0513 13:48:17.046345       1 serviceaccounts_controller.go:117] Starting service account controller
I0513 13:48:17.046353       1 shared_informer.go:223] Waiting for caches to sync for service account
I0513 13:48:17.050317       1 controllermanager.go:533] Started "daemonset"
I0513 13:48:17.050353       1 daemon_controller.go:257] Starting daemon sets controller
I0513 13:48:17.050359       1 shared_informer.go:223] Waiting for caches to sync for daemon sets
I0513 13:48:17.161294       1 controllermanager.go:533] Started "bootstrapsigner"
I0513 13:48:17.161384       1 shared_informer.go:223] Waiting for caches to sync for bootstrap_signer
I0513 13:48:17.970616       1 garbagecollector.go:133] Starting garbage collector controller
I0513 13:48:17.970633       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0513 13:48:17.970647       1 graph_builder.go:282] GraphBuilder running
I0513 13:48:17.970701       1 controllermanager.go:533] Started "garbagecollector"
I0513 13:48:17.978505       1 controllermanager.go:533] Started "disruption"
I0513 13:48:17.978522       1 disruption.go:331] Starting disruption controller
I0513 13:48:17.978528       1 shared_informer.go:223] Waiting for caches to sync for disruption
I0513 13:48:17.984783       1 node_ipam_controller.go:94] Sending events to api server.
I0513 13:48:18.154203       1 request.go:621] Throttling request took 1.047600583s, request: GET:https://172.17.0.2:8443/apis/coordination.k8s.io/v1?timeout=32s

==> kube-controller-manager [d921b5840eed] <==
I0513 13:41:50.383825       1 shared_informer.go:223] Waiting for caches to sync for PV protection
I0513 13:41:50.533245       1 controllermanager.go:533] Started "job"
I0513 13:41:50.533296       1 job_controller.go:144] Starting job controller
I0513 13:41:50.533304       1 shared_informer.go:223] Waiting for caches to sync for job
I0513 13:41:50.684356       1 controllermanager.go:533] Started "replicaset"
I0513 13:41:50.684386       1 replica_set.go:181] Starting replicaset controller
I0513 13:41:50.684400       1 shared_informer.go:223] Waiting for caches to sync for ReplicaSet
I0513 13:41:50.833478       1 controllermanager.go:533] Started "tokencleaner"
I0513 13:41:50.833492       1 tokencleaner.go:118] Starting token cleaner controller
I0513 13:41:50.833707       1 shared_informer.go:223] Waiting for caches to sync for token_cleaner
I0513 13:41:50.833716       1 shared_informer.go:230] Caches are synced for token_cleaner
I0513 13:41:50.983102       1 node_lifecycle_controller.go:78] Sending events to api server
E0513 13:41:50.983325       1 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
W0513 13:41:50.983397       1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I0513 13:41:51.134190       1 controllermanager.go:533] Started "pvc-protection"
I0513 13:41:51.134223       1 pvc_protection_controller.go:101] Starting PVC protection controller
I0513 13:41:51.134258       1 shared_informer.go:223] Waiting for caches to sync for PVC protection
I0513 13:41:51.138337       1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0513 13:41:51.143279       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0513 13:41:51.148068       1 shared_informer.go:230] Caches are synced for endpoint_slice
W0513 13:41:51.148530       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test" does not exist
I0513 13:41:51.148564       1 shared_informer.go:230] Caches are synced for persistent volume
I0513 13:41:51.151279       1 shared_informer.go:230] Caches are synced for GC
I0513 13:41:51.154967       1 shared_informer.go:230] Caches are synced for service account
I0513 13:41:51.158308       1 shared_informer.go:230] Caches are synced for stateful set
I0513 13:41:51.183595       1 shared_informer.go:230] Caches are synced for HPA
I0513 13:41:51.183929       1 shared_informer.go:230] Caches are synced for PV protection
I0513 13:41:51.184000       1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0513 13:41:51.184529       1 shared_informer.go:230] Caches are synced for ReplicaSet
I0513 13:41:51.187247       1 shared_informer.go:230] Caches are synced for deployment
I0513 13:41:51.192669       1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0513 13:41:51.209102       1 shared_informer.go:230] Caches are synced for TTL
I0513 13:41:51.212449       1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0513 13:41:51.219101       1 shared_informer.go:230] Caches are synced for expand
I0513 13:41:51.223030       1 shared_informer.go:230] Caches are synced for endpoint
I0513 13:41:51.229806       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"test-service", UID:"8da9befa-4b23-48b6-a7c1-927740d87a3d", APIVersion:"v1", ResourceVersion:"1243", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint default/test-service: Operation cannot be fulfilled on endpoints "test-service": the object has been modified; please apply your changes to the latest version and try again
I0513 13:41:51.233417       1 shared_informer.go:230] Caches are synced for job
I0513 13:41:51.234359       1 shared_informer.go:230] Caches are synced for PVC protection
I0513 13:41:51.236783       1 shared_informer.go:230] Caches are synced for node
I0513 13:41:51.236802       1 range_allocator.go:172] Starting range CIDR allocator
I0513 13:41:51.236804       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
I0513 13:41:51.236807       1 shared_informer.go:230] Caches are synced for cidrallocator
I0513 13:41:51.245917       1 shared_informer.go:230] Caches are synced for namespace
I0513 13:41:51.343110       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0513 13:41:51.365747       1 shared_informer.go:230] Caches are synced for taint
I0513 13:41:51.365804       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
I0513 13:41:51.365826       1 taint_manager.go:187] Starting NoExecuteTaintManager
W0513 13:41:51.365849       1 node_lifecycle_controller.go:1048] Missing timestamp for Node test. Assuming now as a timestamp.
I0513 13:41:51.365870       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
I0513 13:41:51.366091       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"test", UID:"aed9ac24-52d4-467b-bbba-09384311ce79", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node test event: Registered Node test in Controller
I0513 13:41:51.373392       1 shared_informer.go:230] Caches are synced for daemon sets
I0513 13:41:51.536338       1 shared_informer.go:230] Caches are synced for ReplicationController
I0513 13:41:51.603310       1 shared_informer.go:230] Caches are synced for disruption
I0513 13:41:51.603325       1 disruption.go:339] Sending events to api server.
I0513 13:41:51.604178       1 shared_informer.go:230] Caches are synced for resource quota
I0513 13:41:51.640676       1 shared_informer.go:230] Caches are synced for resource quota
I0513 13:41:51.743579       1 shared_informer.go:230] Caches are synced for garbage collector
I0513 13:41:51.769914       1 shared_informer.go:230] Caches are synced for attach detach
I0513 13:41:51.831938       1 shared_informer.go:230] Caches are synced for garbage collector
I0513 13:41:51.831970       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage

==> kube-proxy [503fbb40f73e] <==
W0513 13:41:22.015364       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0513 13:41:22.020570       1 node.go:136] Successfully retrieved node IP: 172.17.0.3
I0513 13:41:22.020592       1 server_others.go:186] Using iptables Proxier.
I0513 13:41:22.020847       1 server.go:583] Version: v1.18.0
I0513 13:41:22.021270       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0513 13:41:22.021342       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0513 13:41:22.021386       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0513 13:41:22.021728       1 config.go:133] Starting endpoints config controller
I0513 13:41:22.021813       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0513 13:41:22.021874       1 config.go:315] Starting service config controller
I0513 13:41:22.021900       1 shared_informer.go:223] Waiting for caches to sync for service config
I0513 13:41:22.122030       1 shared_informer.go:230] Caches are synced for service config
I0513 13:41:22.122030       1 shared_informer.go:230] Caches are synced for endpoints config

==> kube-scheduler [45171cca5401] <==
I0513 13:47:54.475620       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0513 13:47:54.475678       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0513 13:47:54.946020       1 serving.go:313] Generated self-signed cert in-memory
W0513 13:47:57.920022       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0513 13:47:57.920040       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0513 13:47:57.920046       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0513 13:47:57.920049       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0513 13:47:57.935690       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0513 13:47:57.935704       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0513 13:47:57.936700       1 authorization.go:47] Authorization is disabled
W0513 13:47:57.936711       1 authentication.go:40] Authentication is disabled
I0513 13:47:57.936718       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0513 13:47:57.938037       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0513 13:47:57.939983       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0513 13:47:57.940029       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0513 13:47:57.940038       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0513 13:47:58.038911       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0513 13:47:58.040404       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0513 13:48:14.763483       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kube-scheduler [cf9a5808a6ee] <==
I0513 13:41:17.179962       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0513 13:41:17.180006       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0513 13:41:17.591172       1 serving.go:313] Generated self-signed cert in-memory
W0513 13:41:20.779203       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0513 13:41:20.782051       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0513 13:41:20.782071       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0513 13:41:20.782076       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0513 13:41:20.844274       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0513 13:41:20.844297       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0513 13:41:20.845663       1 authorization.go:47] Authorization is disabled
W0513 13:41:20.845678       1 authentication.go:40] Authentication is disabled
I0513 13:41:20.845685       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0513 13:41:20.847070       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0513 13:41:20.847162       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0513 13:41:20.847168       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0513 13:41:20.847193       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0513 13:41:20.947279       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0513 13:41:20.947323       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0513 13:41:39.021060       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0513 13:47:22.073139       1 reflector.go:380] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://172.17.0.3:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=1303&timeout=7m31s&timeoutSeconds=451&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
E0513 13:47:22.073473       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://172.17.0.3:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1303&timeout=9m57s&timeoutSeconds=597&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
E0513 13:47:22.073527       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://172.17.0.3:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1303&timeout=8m10s&timeoutSeconds=490&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
E0513 13:47:22.073564       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://172.17.0.3:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2107&timeout=7m35s&timeoutSeconds=455&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
E0513 13:47:22.073598       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://172.17.0.3:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1303&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
E0513 13:47:22.073738       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://172.17.0.3:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=1303&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
E0513 13:47:22.073776       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://172.17.0.3:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=1303&timeout=9m18s&timeoutSeconds=558&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
E0513 13:47:22.073806       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://172.17.0.3:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1303&timeout=5m24s&timeoutSeconds=324&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
E0513 13:47:22.073829       1 reflector.go:380] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://172.17.0.3:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=2094&timeoutSeconds=395&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused

==> kubelet <==
-- Logs begin at Wed 2020-05-13 13:47:37 UTC, end at Wed 2020-05-13 13:48:24 UTC. --
May 13 13:48:19 test kubelet[706]: E0513 13:48:19.587433     706 kubelet.go:2267] node "test" not found
May 13 13:48:19 test kubelet[706]: E0513 13:48:19.687706     706 kubelet.go:2267] node "test" not found
[... 19 near-identical 'node "test" not found' kubelet lines trimmed ...]
May 13 13:48:21 test kubelet[706]: E0513 13:48:21.690672     706 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node info: node "test" not found
May 13 13:48:21 test kubelet[706]: E0513 13:48:21.692261     706 kubelet.go:2267] node "test" not found
May 13 13:48:21 test kubelet[706]: W0513 13:48:21.743270     706 status_manager.go:556] Failed to get status for pod "etcd-test_kube-system(6faa35c64253a217ed2b083ff9c6366b)": Get https://172.17.0.3:8443/api/v1/namespaces/kube-system/pods/etcd-test: dial tcp 172.17.0.3:8443: connect: no route to host
May 13 13:48:21 test kubelet[706]: E0513 13:48:21.743285     706 event.go:269] Unable to write event: 'Post https://172.17.0.3:8443/api/v1/namespaces/default/events: dial tcp 172.17.0.3:8443: connect: no route to host' (may retry after sleeping)
May 13 13:48:21 test kubelet[706]: E0513 13:48:21.743528     706 kubelet_node_status.go:92] Unable to register node "test" with API server: Post https://172.17.0.3:8443/api/v1/nodes: dial tcp 172.17.0.3:8443: connect: no route to host
May 13 13:48:21 test kubelet[706]: E0513 13:48:21.743599     706 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get https://172.17.0.3:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test?timeout=10s: dial tcp 172.17.0.3:8443: connect: no route to host
May 13 13:48:21 test kubelet[706]: E0513 13:48:21.792687     706 kubelet.go:2267] node "test" not found
[... 29 near-identical 'node "test" not found' kubelet lines trimmed ...]
May 13 13:48:24 test kubelet[706]: W0513 13:48:24.779861     706 status_manager.go:556] Failed to get status for pod "kube-apiserver-test_kube-system(112c60df9e36eeaf13a6dd3074765810)": Get https://172.17.0.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-test: dial tcp 172.17.0.3:8443: connect: no route to host
May 13 13:48:24 test kubelet[706]: E0513 13:48:24.796888     706 kubelet.go:2267] node "test" not found
May 13 13:48:24 test kubelet[706]: E0513 13:48:24.897031     706 kubelet.go:2267] node "test" not found

==> storage-provisioner [171780a64b20] <==
E0513 13:47:22.074042       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=13, ErrCode=NO_ERROR, debug=""
E0513 13:47:22.074055       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=13, ErrCode=NO_ERROR, debug=""
E0513 13:47:22.074075       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=13, ErrCode=NO_ERROR, debug=""
E0513 13:47:22.074461       1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to watch *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=1303&timeoutSeconds=516&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0513 13:47:22.075451       1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to watch *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=1303&timeoutSeconds=401&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0513 13:47:22.075488       1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to watch *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=1303&timeoutSeconds=598&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused

@priyawadhwa priyawadhwa added the addon/storage-provisioner Issues relating to storage provisioner addon label May 19, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now, please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 17, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now, please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 16, 2020
@priyawadhwa

Is anyone still seeing this bug with minikube version 1.13.1?
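A quick way to re-check is sketched below; repro.yaml is a hypothetical stand-in for your own PV/PVC/Pod manifests that use volumeMounts[].subPath, not a file from this issue.

# Sketch of a re-test loop; repro.yaml is a placeholder for your own manifests.
minikube delete
minikube start
kubectl apply -f repro.yaml
# Watch for CreateContainerConfigError / "no such file or directory" events:
kubectl get pods -A -w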

@priyawadhwa priyawadhwa added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Sep 23, 2020
@andrewshilliday

I still see this issue on v1.15.1

@medyagh medyagh changed the title Using containers[i].volumeMounts[j].subPath produces "no such file or directory" error persistentVolume: Using containers[i].volumeMounts[j].subPath produces "no such file or directory" error Feb 18, 2021
@villevaltonen

villevaltonen commented Dec 4, 2021

Facing the same issue.

Details:

  • minikube: 1.24.0
  • Debian 11 (bullseye): Linux debian 5.10.0-9-amd64
  • Driver: docker

Error:
CreateContainerConfigError: Error: stat /tmp/hostpath-provisioner/kafka/zk-pvc-zk-0: no such file or directory

Although the directories are present:

docker@minikube:~$ ls -al tmp/hostpath-provisioner/kafka/            
total 20
drwxr-xr-x 5 docker docker 4096 Dec  4 09:12 .
drwxr-xr-x 3 docker docker 4096 Dec  4 09:13 ..
drwxr-xr-x 2 docker docker 4096 Dec  4 09:12 zk-pvc-zk-0
drwxr-xr-x 2 docker docker 4096 Dec  4 09:12 zk-pvc-zk-1
drwxr-xr-x 2 docker docker 4096 Dec  4 09:12 zk-pvc-zk-2

docker@minikube:~$ stat /tmp/hostpath-provisioner/kafka/zk-pvc-zk-0
  File: /tmp/hostpath-provisioner/kafka/zk-pvc-zk-0
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: fe00h/65024d    Inode: 1325380     Links: 2
Access: (0777/drwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-12-04 09:17:46.688050850 +0000
Modify: 2021-12-04 09:16:29.598036927 +0000
Change: 2021-12-04 09:16:29.598036927 +0000
 Birth: -

Edit: This seems to be an issue specific to the Docker driver, because the same manifests work as-is with the VirtualBox driver.
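A workaround that has helped with similar subPath reports is to pre-create the subPath directory from an initContainer that mounts the root of the volume, so the directory already exists when the kubelet resolves the main container's subPath. A minimal sketch follows; the pod name, claim name, subdirectory, and image are illustrative, not taken from this issue:

# Sketch: pre-create the subPath target from an initContainer (names illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: subpath-workaround
spec:
  initContainers:
  - name: make-subdir
    image: busybox
    command: ["sh", "-c", "mkdir -p /data/my-subdir"]
    volumeMounts:
    - name: data
      mountPath: /data          # whole volume, no subPath here
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /work && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /work
      subPath: my-subdir        # directory now exists when this is resolved
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim       # illustrative claim name

This doesn't fix the provisioner itself, but it sidesteps the stat failure for hostPath-backed volumes.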
