Docker on Oracle 7.4: KubeletNotReady & failed to find subsystem mount for required subsystem: pids #8797
Comments
It seems that minikube starts successfully; however, when I run kubectl get nodes, the node status is "NotReady", and after applying a deployment the pod status is always Pending.
This particular state error is interesting:
As well as these (possible red herring):
this points us toward the root cause of why kubelet can't schedule:
and finally the biggest red flag:
I think there may be something that we need to do with the Docker configuration on your host to make it compatible with running Kubernetes. Can you try:
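The command suggested at this point was lost in the copy. As a minimal sketch, assuming the intent was to look at the host Docker cgroup configuration (standard Docker CLI calls, not taken from the original reply):

# Report which cgroup driver the host Docker daemon uses (cgroupfs vs. systemd)
docker info --format '{{.CgroupDriver}}'

# Show any other cgroup-related details reported by the daemon
docker info 2>/dev/null | grep -i cgroup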
@tstromberg There is no --force-systemd option for the start command: [jiekong@den03fyu ~]$ minikube start --force-systemd
OK. Please upgrade to the latest version of minikube then. This problem was either fixed, or this flag should fix it:
I got confused because your bug report has both output from minikube v1.12.0 and v1.9.1.
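A hedged sketch of the upgrade path being suggested here, assuming a Linux x86-64 host and the standard minikube binary install; --force-systemd is the flag confirmed later in the thread:

# Install the latest minikube release binary (official install method for Linux amd64)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Discard the old cluster state and retry with the systemd cgroup manager forced on
minikube delete
minikube start --driver=docker --force-systemd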
Yes, after upgrading I could run minikube start --force-systemd; however, it still failed.
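One generic way to make that follow-up failure easier to diagnose (illustrative commands, not something requested in the original thread) is to capture verbose output from both the start attempt and the cluster:

# Re-run with verbose logging and keep the output to attach to the issue
minikube delete
minikube start --driver=docker --force-systemd --alsologtostderr -v=1 2> minikube-start.log

# Collect cluster-side logs as well
minikube logs > minikube-logs.txt 2>&1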
@tstromberg Any suggestions?
What sort of failure are you seeing with the
Hey @Lavie526, are you still seeing this issue?
On your host, do you mind sharing the output of:
I suspect it may be missing.
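The command requested above is cut off in this copy of the thread. Given the "failed to find subsystem mount for required subsystem: pids" error in the title, a plausible host-side check (illustrative only, under the assumption that the pids cgroup controller is what is suspected to be missing) would be:

# Is the pids controller known to the kernel and enabled? (the last column should be 1)
grep pids /proc/cgroups

# Is it actually mounted in the cgroup hierarchy?
mount | grep cgroup | grep pids
ls /sys/fs/cgroup | grep pids

For reference, the pids controller was only added upstream in Linux 4.3, while the hostinfo line in the log below reports kernel 4.1.12, which would be consistent with this suspicion.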
Hi @Lavie526, I haven't heard back from you; I wonder if you still have this issue. I will close this issue for now, but please feel free to reopen it whenever you are ready to provide more information.
Steps to reproduce the issue:
1. minikube start --vm-driver=docker
2. 😄 minikube v1.12.0 on Oracle 7.4 (xen/amd64)
▪ KUBECONFIG=/scratch/jiekong/.kube/config
▪ MINIKUBE_HOME=/scratch/jiekong
✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=14600MB) ...
🌐 Found network options:
▪ NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
▪ http_proxy=http://www-proxy-brmdc.us.*.com:80/
▪ https_proxy=http://www-proxy-brmdc.us.*.com:80/
▪ no_proxy=10.88.105.73,localhost,127.0.0.1,172.17.0.3
❗ This container is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
▪ env NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
▪ env HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/
▪ env HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/
▪ env NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
3. kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube NotReady master 2m35s v1.18.3
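To see why the node stays NotReady, a generic follow-up (not part of the original report) is to describe the node and check its conditions, which is typically where a KubeletNotReady reason such as the one in the issue title shows up:

# Show node conditions and the kubelet's reported reason for NotReady
kubectl describe node minikube

# Check whether the kube-system pods are stuck in Pending
kubectl get pods -n kube-system -o wide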
Full output of failed command:
[jiekong@den03fyu ~]$ minikube start --driver=docker --alsologtostderr
I0721 17:46:33.750189 37623 start.go:261] hostinfo: {"hostname":"den03fyu","uptime":1096853,"bootTime":1594281940,"procs":385,"os":"linux","platform":"oracle","platformFamily":"rhel","platformVersion":"7.4","kernelVersion":"4.1.12-124.39.5.1.el7uek.x86_64","virtualizationSystem":"xen","virtualizationRole":"guest","hostid":"502e3f0d-c118-48cd-ad65-83be8d0cb82f"}
I0721 17:46:33.751105 37623 start.go:271] virtualization: xen guest
😄 minikube v1.9.1 on Oracle 7.4 (xen/amd64)
▪ KUBECONFIG=/scratch/jiekong/.kube/config
▪ MINIKUBE_HOME=/scratch/jiekong
I0721 17:46:33.753667 37623 driver.go:246] Setting default libvirt URI to qemu:///system
✨ Using the docker driver based on user configuration
I0721 17:46:33.870056 37623 start.go:309] selected driver: docker
I0721 17:46:33.870105 37623 start.go:655] validating driver "docker" against
I0721 17:46:33.870126 37623 start.go:661] status for docker: {Installed:true Healthy:true Error: Fix: Doc:}
I0721 17:46:33.870159 37623 start.go:1098] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0721 17:46:33.978964 37623 start.go:1003] Using suggested 14600MB memory alloc based on sys=58702MB, container=58702MB
👍 Starting control plane node m01 in cluster minikube
🚜 Pulling base image ...
I0721 17:46:33.980817 37623 cache.go:104] Beginning downloading kic artifacts
I0721 17:46:33.980857 37623 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0721 17:46:33.980898 37623 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0721 17:46:33.980927 37623 preload.go:97] Found local preload: /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0721 17:46:33.981062 37623 cache.go:46] Caching tarball of preloaded images
I0721 17:46:33.981093 37623 preload.go:123] Found /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0721 17:46:33.981109 37623 cache.go:49] Finished downloading the preloaded tar for v1.18.0 on docker
I0721 17:46:33.981029 37623 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0721 17:46:33.981392 37623 profile.go:138] Saving config to /scratch/jiekong/.minikube/profiles/minikube/config.json ...
I0721 17:46:33.981530 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/config.json: {Name:mkeb6d736586eadd60342788b13e7e9947272373 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:34.075405 37623 image.go:90] Found gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 in local docker daemon, skipping pull
I0721 17:46:34.075649 37623 cache.go:117] Successfully downloaded all kic artifacts
I0721 17:46:34.075806 37623 start.go:260] acquiring machines lock for minikube: {Name:mkc0391c2630d5de37a791bd924e47ce04943c1a Clock:{} Delay:500ms Timeout:15m0s Cancel:}
I0721 17:46:34.076082 37623 start.go:264] acquired machines lock for "minikube" in 137.749µs
I0721 17:46:34.076221 37623 start.go:86] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:14600 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2 HTTP_PROXY=http://www-proxy-brmdc.us..com:80/ HTTPS_PROXY=http://www-proxy-brmdc.us..com:80/ NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0721 17:46:34.076465 37623 start.go:107] createHost starting for "m01" (driver="docker")
🔥 Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=14600MB (58702MB available) ...
I0721 17:46:34.193858 37623 start.go:143] libmachine.API.Create for "minikube" (driver="docker")
I0721 17:46:34.193921 37623 client.go:169] LocalClient.Create starting
I0721 17:46:34.194024 37623 main.go:110] libmachine: Reading certificate data from /scratch/jiekong/.minikube/certs/ca.pem
I0721 17:46:34.194088 37623 main.go:110] libmachine: Decoding PEM data...
I0721 17:46:34.194121 37623 main.go:110] libmachine: Parsing certificate...
I0721 17:46:34.194304 37623 main.go:110] libmachine: Reading certificate data from /scratch/jiekong/.minikube/certs/cert.pem
I0721 17:46:34.194356 37623 main.go:110] libmachine: Decoding PEM data...
I0721 17:46:34.194380 37623 main.go:110] libmachine: Parsing certificate...
I0721 17:46:34.194900 37623 oci.go:245] executing with [docker ps -a --format {{.Names}}] timeout: 15s
I0721 17:46:34.252062 37623 volumes.go:97] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0721 17:46:34.307297 37623 oci.go:128] Successfully created a docker volume minikube
I0721 17:46:35.082249 37623 oci.go:245] executing with [docker inspect minikube --format={{.State.Status}}] timeout: 15s
I0721 17:46:35.148292 37623 oci.go:160] the created container "minikube" has a running status.
I0721 17:46:35.148638 37623 kic.go:142] Creating ssh key for kic: /scratch/jiekong/.minikube/machines/minikube/id_rsa...
I0721 17:46:35.704764 37623 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0721 17:46:35.763348 37623 client.go:172] LocalClient.Create took 1.569388902s
I0721 17:46:37.763915 37623 start.go:110] createHost completed in 3.687316658s
I0721 17:46:37.764235 37623 start.go:77] releasing machines lock for "minikube", held for 3.688032819s
🤦 StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: apply authorized_keys file ownership, output
** stderr **
Error response from daemon: Container 493c72f4fe54225f2fd2c660e11937cd756828923103f70312537e30d9035daf is not running
** /stderr **: chown docker:docker /home/docker/.ssh/authorized_keys: exit status 1
stdout:
stderr:
Error response from daemon: Container 493c72f4fe54225f2fd2c660e11937cd756828923103f70312537e30d9035daf is not running
I0721 17:46:37.765970 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
🔥 Deleting "minikube" in docker ...
I0721 17:46:42.967743 37623 start.go:260] acquiring machines lock for minikube: {Name:mkc0391c2630d5de37a791bd924e47ce04943c1a Clock:{} Delay:500ms Timeout:15m0s Cancel:}
I0721 17:46:42.968268 37623 start.go:264] acquired machines lock for "minikube" in 192.263µs
I0721 17:46:42.968485 37623 start.go:86] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:14600 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2 HTTP_PROXY=http://www-proxy-brmdc.us..com:80/ HTTPS_PROXY=http://www-proxy-brmdc.us..com:80/ NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0721 17:46:42.968803 37623 start.go:107] createHost starting for "m01" (driver="docker")
🔥 Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=14600MB (58702MB available) ...
I0721 17:46:43.088972 37623 start.go:143] libmachine.API.Create for "minikube" (driver="docker")
I0721 17:46:43.089076 37623 client.go:169] LocalClient.Create starting
I0721 17:46:43.089170 37623 main.go:110] libmachine: Reading certificate data from /scratch/jiekong/.minikube/certs/ca.pem
I0721 17:46:43.089240 37623 main.go:110] libmachine: Decoding PEM data...
I0721 17:46:43.089287 37623 main.go:110] libmachine: Parsing certificate...
I0721 17:46:43.089488 37623 main.go:110] libmachine: Reading certificate data from /scratch/jiekong/.minikube/certs/cert.pem
I0721 17:46:43.089573 37623 main.go:110] libmachine: Decoding PEM data...
I0721 17:46:43.089606 37623 main.go:110] libmachine: Parsing certificate...
I0721 17:46:43.089901 37623 oci.go:245] executing with [docker ps -a --format {{.Names}}] timeout: 15s
I0721 17:46:43.143202 37623 volumes.go:97] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0721 17:46:43.195154 37623 oci.go:128] Successfully created a docker volume minikube
I0721 17:46:43.841737 37623 oci.go:245] executing with [docker inspect minikube --format={{.State.Status}}] timeout: 15s
I0721 17:46:43.913242 37623 oci.go:160] the created container "minikube" has a running status.
I0721 17:46:43.913428 37623 kic.go:142] Creating ssh key for kic: /scratch/jiekong/.minikube/machines/minikube/id_rsa...
I0721 17:46:44.507946 37623 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0721 17:46:44.716634 37623 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0721 17:46:44.716990 37623 preload.go:97] Found local preload: /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0721 17:46:44.717218 37623 kic.go:128] Starting extracting preloaded images to volume
I0721 17:46:44.717486 37623 volumes.go:85] executing: [docker run --rm --entrypoint /usr/bin/tar -v /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 -I lz4 -xvf /preloaded.tar -C /extractDir]
I0721 17:46:50.046119 37623 kic.go:133] Took 5.328921 seconds to extract preloaded images to volume
I0721 17:46:50.046360 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
I0721 17:46:50.104844 37623 machine.go:86] provisioning docker machine ...
I0721 17:46:50.105023 37623 ubuntu.go:166] provisioning hostname "minikube"
I0721 17:46:50.161118 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:50.161526 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:50.161719 37623 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0721 17:46:50.297031 37623 main.go:110] libmachine: SSH cmd err, output: : minikube
I0721 17:46:50.354730 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:50.355279 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:50.355503 37623 main.go:110] libmachine: About to run SSH command:
I0721 17:46:50.466315 37623 main.go:110] libmachine: SSH cmd err, output: :
I0721 17:46:50.466476 37623 ubuntu.go:172] set auth options {CertDir:/scratch/jiekong/.minikube CaCertPath:/scratch/jiekong/.minikube/certs/ca.pem CaPrivateKeyPath:/scratch/jiekong/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/scratch/jiekong/.minikube/machines/server.pem ServerKeyPath:/scratch/jiekong/.minikube/machines/server-key.pem ClientKeyPath:/scratch/jiekong/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/scratch/jiekong/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/scratch/jiekong/.minikube}
I0721 17:46:50.466662 37623 ubuntu.go:174] setting up certificates
I0721 17:46:50.466932 37623 provision.go:83] configureAuth start
I0721 17:46:50.543308 37623 provision.go:132] copyHostCerts
I0721 17:46:50.544049 37623 provision.go:106] generating server cert: /scratch/jiekong/.minikube/machines/server.pem ca-key=/scratch/jiekong/.minikube/certs/ca.pem private-key=/scratch/jiekong/.minikube/certs/ca-key.pem org=jiekong.minikube san=[172.17.0.2 localhost 127.0.0.1]
I0721 17:46:50.988368 37623 provision.go:160] copyRemoteCerts
I0721 17:46:51.081778 37623 ssh_runner.go:101] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0721 17:46:51.136987 37623 ssh_runner.go:155] Checked if /etc/docker/ca.pem exists, but got error: Process exited with status 1
I0721 17:46:51.137497 37623 ssh_runner.go:174] Transferring 1038 bytes to /etc/docker/ca.pem
I0721 17:46:51.138772 37623 ssh_runner.go:193] ca.pem: copied 1038 bytes
I0721 17:46:51.163148 37623 ssh_runner.go:155] Checked if /etc/docker/server.pem exists, but got error: Process exited with status 1
I0721 17:46:51.163755 37623 ssh_runner.go:174] Transferring 1123 bytes to /etc/docker/server.pem
I0721 17:46:51.164840 37623 ssh_runner.go:193] server.pem: copied 1123 bytes
I0721 17:46:51.189267 37623 ssh_runner.go:155] Checked if /etc/docker/server-key.pem exists, but got error: Process exited with status 1
I0721 17:46:51.189752 37623 ssh_runner.go:174] Transferring 1679 bytes to /etc/docker/server-key.pem
I0721 17:46:51.190490 37623 ssh_runner.go:193] server-key.pem: copied 1679 bytes
I0721 17:46:51.212222 37623 provision.go:86] configureAuth took 745.1519ms
I0721 17:46:51.212350 37623 ubuntu.go:190] setting minikube options for container-runtime
I0721 17:46:51.270024 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:51.270463 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:51.270624 37623 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0721 17:46:51.384286 37623 main.go:110] libmachine: SSH cmd err, output: : overlay
I0721 17:46:51.384328 37623 ubuntu.go:71] root file system type: overlay
I0721 17:46:51.384531 37623 provision.go:295] Updating docker unit: /lib/systemd/system/docker.service ...
I0721 17:46:51.443643 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:51.444018 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:51.444235 37623 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2"
Environment="HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/"
Environment="HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/"
Environment="NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0721 17:46:51.567443 37623 main.go:110] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
Environment=NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2
Environment=HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/
Environment=HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/
Environment=NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0721 17:46:51.629055 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:51.629298 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:51.629380 37623 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
I0721 17:46:52.145149 37623 main.go:110] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2020-07-22 00:46:51.564843610 +0000
@@ -8,24 +8,26 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+Environment=NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2
+Environment=HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/
+Environment=HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/
+Environment=NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +35,10 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
I0721 17:46:52.145225 37623 machine.go:89] provisioned docker machine in 2.040237341s
I0721 17:46:52.145239 37623 client.go:172] LocalClient.Create took 9.056130529s
I0721 17:46:52.145253 37623 start.go:148] libmachine.API.Create for "minikube" took 9.056292084s
I0721 17:46:52.145263 37623 start.go:189] post-start starting for "minikube" (driver="docker")
I0721 17:46:52.145281 37623 start.go:199] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0721 17:46:52.145298 37623 start.go:234] Returning KICRunner for "docker" driver
I0721 17:46:52.145447 37623 kic_runner.go:91] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0721 17:46:52.339777 37623 filesync.go:118] Scanning /scratch/jiekong/.minikube/addons for local assets ...
I0721 17:46:52.340130 37623 filesync.go:118] Scanning /scratch/jiekong/.minikube/files for local assets ...
I0721 17:46:52.340312 37623 start.go:192] post-start completed in 195.03773ms
I0721 17:46:52.341092 37623 start.go:110] createHost completed in 9.37208076s
I0721 17:46:52.341204 37623 start.go:77] releasing machines lock for "minikube", held for 9.37274675s
🌐 Found network options:
▪ NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2
▪ http_proxy=http://www-proxy-brmdc.us.*.com:80/
▪ https_proxy=http://www-proxy-brmdc.us.*.com:80/
▪ no_proxy=10.88.105.73,localhost,127.0.0.1,172.17.0.3
I0721 17:46:52.409980 37623 profile.go:138] Saving config to /scratch/jiekong/.minikube/profiles/minikube/config.json ...
I0721 17:46:52.410241 37623 kic_runner.go:91] Run: curl -sS -m 2 https://k8s.gcr.io/
I0721 17:46:52.410683 37623 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0721 17:46:52.738320 37623 kic_runner.go:91] Run: sudo systemctl stop -f containerd
I0721 17:46:53.038502 37623 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0721 17:46:53.291921 37623 kic_runner.go:91] Run: sudo systemctl is-active --quiet service crio
I0721 17:46:53.533211 37623 kic_runner.go:91] Run: sudo systemctl start docker
W0721 17:46:53.827858 37623 start.go:430] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: exit status 7
stdout:
stderr:
curl: (7) Failed to connect to k8s.gcr.io port 443: Connection timed out
❗ This container is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0721 17:46:54.194748 37623 kic_runner.go:91] Run: docker version --format {{.Server.Version}}
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
▪ env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2
▪ env HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/
▪ env HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/
▪ env NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3
▪ kubeadm.pod-network-cidr=10.244.0.0/16
I0721 17:46:54.519685 37623 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0721 17:46:54.519732 37623 preload.go:97] Found local preload: /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0721 17:46:54.519860 37623 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0721 17:46:54.832933 37623 docker.go:367] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0721 17:46:54.833353 37623 docker.go:305] Images already preloaded, skipping extraction
I0721 17:46:54.833561 37623 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0721 17:46:55.157820 37623 docker.go:367] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0721 17:46:55.159086 37623 cache_images.go:69] Images are preloaded, skipping loading
I0721 17:46:55.159243 37623 kubeadm.go:125] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:172.17.0.2}
I0721 17:46:55.159489 37623 kubeadm.go:129] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.17.0.2
bindPort: 8443
bootstrapTokens:
ttl: 24h0m0s
usages:
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "minikube"
kubeletExtraArgs:
node-ip: 172.17.0.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: 172.17.0.2:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: 172.17.0.2:10249
I0721 17:46:55.160738 37623 kic_runner.go:91] Run: docker info --format {{.CgroupDriver}}
I0721 17:46:55.456338 37623 kubeadm.go:649] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests
[Install]
config:
{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:}
I0721 17:46:55.457267 37623 kic_runner.go:91] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
I0721 17:46:55.710905 37623 binaries.go:42] Found k8s binaries, skipping transfer
I0721 17:46:55.711280 37623 kic_runner.go:91] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0721 17:46:56.893023 37623 kic_runner.go:91] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0721 17:46:57.099916 37623 kic_runner.go:91] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet"
I0721 17:46:57.452146 37623 certs.go:51] Setting up /scratch/jiekong/.minikube/profiles/minikube for IP: 172.17.0.2
I0721 17:46:57.452676 37623 certs.go:169] skipping minikubeCA CA generation: /scratch/jiekong/.minikube/ca.key
I0721 17:46:57.452995 37623 certs.go:169] skipping proxyClientCA CA generation: /scratch/jiekong/.minikube/proxy-client-ca.key
I0721 17:46:57.453238 37623 certs.go:267] generating minikube-user signed cert: /scratch/jiekong/.minikube/profiles/minikube/client.key
I0721 17:46:57.453386 37623 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/client.crt with IP's: []
I0721 17:46:57.613530 37623 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/client.crt ...
I0721 17:46:57.613626 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/client.crt: {Name:mk102f7d86706185740d9bc9a57fc1d55716aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:57.613862 37623 crypto.go:165] Writing key to /scratch/jiekong/.minikube/profiles/minikube/client.key ...
I0721 17:46:57.613895 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/client.key: {Name:mkef0a0f26fc07209d23f79940d16c45455b63f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:57.614058 37623 certs.go:267] generating minikube signed cert: /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.eaa33411
I0721 17:46:57.614102 37623 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.eaa33411 with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0721 17:46:57.850254 37623 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.eaa33411 ...
I0721 17:46:57.850325 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.eaa33411: {Name:mk723c191d10c2ebe7f83ef10c6921ca6c302446 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:57.850664 37623 crypto.go:165] Writing key to /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.eaa33411 ...
I0721 17:46:57.850695 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.eaa33411: {Name:mk68405f7f632b1f5980112bc4deb27222ae4de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:57.850813 37623 certs.go:278] copying /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.eaa33411 -> /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt
I0721 17:46:57.850931 37623 certs.go:282] copying /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.eaa33411 -> /scratch/jiekong/.minikube/profiles/minikube/apiserver.key
I0721 17:46:57.851071 37623 certs.go:267] generating aggregator signed cert: /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key
I0721 17:46:57.851097 37623 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0721 17:46:58.087194 37623 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt ...
I0721 17:46:58.087273 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt: {Name:mkd86cf3f7172f909cc9174e9befa523ad3f3568 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:58.087644 37623 crypto.go:165] Writing key to /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key ...
I0721 17:46:58.087683 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key: {Name:mk86f427bfbc5f46a12e1a6ff48f5514472dcc9b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:58.088100 37623 certs.go:330] found cert: ca-key.pem (1679 bytes)
I0721 17:46:58.088214 37623 certs.go:330] found cert: ca.pem (1038 bytes)
I0721 17:46:58.088271 37623 certs.go:330] found cert: cert.pem (1078 bytes)
I0721 17:46:58.088344 37623 certs.go:330] found cert: key.pem (1679 bytes)
I0721 17:46:58.089648 37623 certs.go:120] copying: /var/lib/minikube/certs/apiserver.crt
I0721 17:46:58.421966 37623 certs.go:120] copying: /var/lib/minikube/certs/apiserver.key
I0721 17:46:58.691424 37623 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.crt
I0721 17:46:59.005729 37623 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.key
I0721 17:46:59.308777 37623 certs.go:120] copying: /var/lib/minikube/certs/ca.crt
I0721 17:46:59.607846 37623 certs.go:120] copying: /var/lib/minikube/certs/ca.key
I0721 17:46:59.871866 37623 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.crt
I0721 17:47:00.150709 37623 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.key
I0721 17:47:00.446862 37623 certs.go:120] copying: /usr/share/ca-certificates/minikubeCA.pem
I0721 17:47:00.730471 37623 certs.go:120] copying: /var/lib/minikube/kubeconfig
I0721 17:47:01.036943 37623 kic_runner.go:91] Run: openssl version
I0721 17:47:01.235432 37623 kic_runner.go:91] Run: sudo /bin/bash -c "test -f /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0721 17:47:01.506692 37623 kic_runner.go:91] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0721 17:47:01.750180 37623 certs.go:370] hashing: -rw-r--r-- 1 root root 1066 Jul 21 09:00 /usr/share/ca-certificates/minikubeCA.pem
I0721 17:47:01.750686 37623 kic_runner.go:91] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0721 17:47:02.050692 37623 kic_runner.go:91] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0721 17:47:02.330389 37623 kubeadm.go:278] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:14600 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2 HTTP_PROXY=http://www-proxy-brmdc.us..com:80/ HTTPS_PROXY=http://www-proxy-brmdc.us..com:80/ NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[]}
I0721 17:47:02.330974 37623 kic_runner.go:91] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0721 17:47:02.648189 37623 kic_runner.go:91] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0721 17:47:02.914215 37623 kic_runner.go:91] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0721 17:47:03.167207 37623 kubeadm.go:214] ignoring SystemVerification for kubeadm because of either driver or kubernetes version
I0721 17:47:03.167722 37623 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0721 17:47:03.439826 37623 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0721 17:47:03.709049 37623 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0721 17:47:03.968721 37623 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0721 17:47:04.226758 37623 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0721 17:47:25.204124 37623 kic_runner.go:118] Done: [docker exec --privileged minikube /bin/bash -c sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: (20.977117049s)
I0721 17:47:25.204593 37623 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create --kubeconfig=/var/lib/minikube/kubeconfig -f -
I0721 17:47:25.806501 37623 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl label nodes minikube.k8s.io/version=v1.9.1 minikube.k8s.io/commit=d8747aec7ebf8332ddae276d5f8fb42d3152b5a1 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_07_21T17_47_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0721 17:47:26.187492 37623 kic_runner.go:91] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0721 17:47:26.452060 37623 ops.go:35] apiserver oom_adj: -16
I0721 17:47:26.452480 37623 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0721 17:47:26.807505 37623 kubeadm.go:772] duration metric: took 355.179157ms to wait for elevateKubeSystemPrivileges.
I0721 17:47:26.807798 37623 kubeadm.go:280] StartCluster complete in 24.477425583s
I0721 17:47:26.807972 37623 settings.go:123] acquiring lock: {Name:mk6f220c874ab31ad6cc0cf9a6c90f7ab17dd518 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:47:26.808240 37623 settings.go:131] Updating kubeconfig: /scratch/jiekong/.kube/config
I0721 17:47:26.809659 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.kube/config: {Name:mk262b9661e6e96133150ac3387d626503976a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:47:26.810084 37623 addons.go:280] enableAddons start: toEnable=map[], additional=[]
🌟 Enabling addons: default-storageclass, storage-provisioner
I0721 17:47:26.813103 37623 addons.go:45] Setting default-storageclass=true in profile "minikube"
I0721 17:47:26.813258 37623 addons.go:230] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0721 17:47:26.816596 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
I0721 17:47:26.892845 37623 addons.go:104] Setting addon default-storageclass=true in "minikube"
W0721 17:47:26.893250 37623 addons.go:119] addon default-storageclass should already be in state true
I0721 17:47:26.893406 37623 host.go:65] Checking if "minikube" exists ...
I0721 17:47:26.895118 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
I0721 17:47:26.957651 37623 addons.go:197] installing /etc/kubernetes/addons/storageclass.yaml
I0721 17:47:27.249982 37623 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0721 17:47:27.633954 37623 addons.go:70] Writing out "minikube" config to set default-storageclass=true...
I0721 17:47:27.634580 37623 addons.go:45] Setting storage-provisioner=true in profile "minikube"
I0721 17:47:27.634938 37623 addons.go:104] Setting addon storage-provisioner=true in "minikube"
W0721 17:47:27.635181 37623 addons.go:119] addon storage-provisioner should already be in state true
I0721 17:47:27.635285 37623 host.go:65] Checking if "minikube" exists ...
I0721 17:47:27.635866 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
I0721 17:47:27.697770 37623 addons.go:197] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0721 17:47:28.018139 37623 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0721 17:47:28.445751 37623 addons.go:70] Writing out "minikube" config to set storage-provisioner=true...
I0721 17:47:28.446231 37623 addons.go:282] enableAddons completed in 1.636145178s
I0721 17:47:28.446437 37623 kverify.go:52] waiting for apiserver process to appear ...
I0721 17:47:28.446666 37623 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0721 17:47:28.692657 37623 kverify.go:72] duration metric: took 246.213612ms to wait for apiserver process to appear ...
I0721 17:47:28.695462 37623 kverify.go:187] waiting for apiserver healthz status ...
I0721 17:47:28.695686 37623 kverify.go:298] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0721 17:47:28.703982 37623 kverify.go:240] control plane version: v1.18.0
I0721 17:47:28.704048 37623 kverify.go:230] duration metric: took 8.369999ms to wait for apiserver health ...
I0721 17:47:28.704067 37623 kverify.go:150] waiting for kube-system pods to appear ...
I0721 17:47:28.714575 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:28.714636 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:29.217496 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:29.217621 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:29.717842 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:29.717929 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:30.218799 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:30.219228 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:30.717984 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:30.718085 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:31.217780 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:31.218100 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:31.720286 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:31.720347 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:32.219323 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:32.219404 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:32.718483 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:32.718754 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:33.217975 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:33.218198 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:33.725091 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:33.725322 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:34.218670 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:34.218978 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:34.718344 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:34.718396 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:35.218070 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:35.218270 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:35.717874 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:35.717912 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:36.217797 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:36.217836 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:36.718737 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:36.718970 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:37.217375 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:37.217605 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:37.719767 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:37.719814 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:38.217994 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:38.218255 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:38.717472 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:38.717714 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:39.218353 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:39.218646 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:39.718128 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:39.718383 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:40.217931 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:40.218158 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:40.718948 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:40.719036 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:41.220322 37623 kverify.go:168] 5 kube-system pods found
I0721 17:47:41.220417 37623 kverify.go:170] "coredns-66bff467f8-pxvdv" [979340b0-81c5-4619-979a-d35ecc7076f8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:41.220447 37623 kverify.go:170] "coredns-66bff467f8-xx8pk" [caed6688-147c-448c-9cce-847f585bfb9b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:41.220474 37623 kverify.go:170] "kindnet-gw9vd" [6fdae753-79bf-4de7-b041-883386a80c8b] Pending
I0721 17:47:41.220497 37623 kverify.go:170] "kube-proxy-pvzpl" [5b3ada32-95db-4d44-b556-0ad3bb486004] Pending
I0721 17:47:41.220563 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:41.220595 37623 kverify.go:181] duration metric: took 12.516496038s to wait for pod list to return data ...
🏄 Done! kubectl is now configured to use "minikube"
I0721 17:47:41.289129 37623 start.go:453] kubectl: 1.18.6, cluster: 1.18.0 (minor skew: 0)
Full output of minikube start command used, if not already included:
Optional: Full output of minikube logs command:
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
a2ad4a56696dd 303ce5db0e90d About a minute ago Running etcd 0 ed92a60b2e454
a3ff7508f7baa 74060cea7f704 About a minute ago Running kube-apiserver 0 edc7a6cc15f33
9ceffe31db4f7 a31f78c7c8ce1 About a minute ago Running kube-scheduler 0 ce30a4b7e2db8
a533bff3e4a13 d3e55153f52fb About a minute ago Running kube-controller-manager 0 e36431ab435d4
==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=d8747aec7ebf8332ddae276d5f8fb42d3152b5a1
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_07_21T17_47_25_0700
minikube.k8s.io/version=v1.9.1
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 22 Jul 2020 00:47:22 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime:
RenewTime: Wed, 22 Jul 2020 00:48:46 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Wed, 22 Jul 2020 00:48:46 +0000 Wed, 22 Jul 2020 00:47:17 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 22 Jul 2020 00:48:46 +0000 Wed, 22 Jul 2020 00:47:17 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 22 Jul 2020 00:48:46 +0000 Wed, 22 Jul 2020 00:47:17 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 22 Jul 2020 00:48:46 +0000 Wed, 22 Jul 2020 00:47:31 +0000 KubeletNotReady container runtime status check may not have completed yet
Addresses:
InternalIP: 172.17.0.2
Hostname: minikube
Capacity:
cpu: 16
ephemeral-storage: 804139352Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 60111844Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 804139352Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 60111844Ki
pods: 110
System Info:
Machine ID: d5e93611b2854dd3ac5c4998c9ef3654
System UUID: c3ad8db6-b587-4ede-acd3-2a224d152926
Boot ID: 55a28076-973d-4fd3-9b32-b25e77bad388
Kernel Version: 4.1.12-124.39.5.1.el7uek.x86_64
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.18.0
Kube-Proxy Version: v1.18.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
kube-system kindnet-gw9vd 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 72s
kube-system kube-proxy-pvzpl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 72s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 100m (0%) 100m (0%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
Normal NodeHasSufficientMemory 97s (x4 over 97s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 97s (x4 over 97s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 97s (x4 over 97s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 81s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 81s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 81s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 81s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeNotReady 81s kubelet, minikube Node minikube status is now: NodeNotReady
Normal Starting 74s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 73s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 73s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 73s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 66s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 66s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 66s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 66s kubelet, minikube Starting kubelet.
Normal Starting 59s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 59s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 59s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 59s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 51s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 51s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 51s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 51s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 44s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 44s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 44s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 44s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 36s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 36s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 36s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 36s kubelet, minikube Starting kubelet.
Normal Starting 29s kubelet, minikube Starting kubelet.
Normal NodeHasNoDiskPressure 28s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 28s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 28s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 21s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 21s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 21s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 21s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 14s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 13s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 6s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 6s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
==> dmesg <==
[Jul18 23:37] systemd-fstab-generator[40881]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul19 05:37] systemd-fstab-generator[36344]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul19 11:37] systemd-fstab-generator[34144]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul19 17:37] systemd-fstab-generator[28939]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul19 23:37] systemd-fstab-generator[24785]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 01:35] systemd-fstab-generator[65022]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:22] systemd-fstab-generator[110191]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:23] systemd-fstab-generator[110364]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:24] systemd-fstab-generator[110483]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +2.796808] systemd-fstab-generator[110870]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +18.140954] systemd-fstab-generator[112205]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:26] systemd-fstab-generator[117323]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:37] systemd-fstab-generator[123141]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 11:37] systemd-fstab-generator[4023]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 17:37] systemd-fstab-generator[63123]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 23:37] systemd-fstab-generator[31942]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 05:37] systemd-fstab-generator[5139]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:28] systemd-fstab-generator[84894]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:30] systemd-fstab-generator[85161]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +13.594292] systemd-fstab-generator[85297]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +6.794363] systemd-fstab-generator[85367]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +1.429949] systemd-fstab-generator[85572]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +2.556154] systemd-fstab-generator[85950]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:32] systemd-fstab-generator[89986]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:35] systemd-fstab-generator[95248]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +7.413234] systemd-fstab-generator[95589]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +1.795417] systemd-fstab-generator[95786]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +2.981647] systemd-fstab-generator[96146]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:37] systemd-fstab-generator[100059]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:41] systemd-fstab-generator[106619]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +27.338319] systemd-fstab-generator[107758]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +15.088659] systemd-fstab-generator[108197]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:42] systemd-fstab-generator[108448]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:44] systemd-fstab-generator[111004]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:46] systemd-fstab-generator[112673]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:48] systemd-fstab-generator[115320]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:56] systemd-fstab-generator[122877]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:57] systemd-fstab-generator[123507]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:58] systemd-fstab-generator[127690]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:07] systemd-fstab-generator[30225]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:09] systemd-fstab-generator[30698]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +4.108791] systemd-fstab-generator[31109]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +19.365822] systemd-fstab-generator[31768]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:16] systemd-fstab-generator[38093]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:19] systemd-fstab-generator[39833]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +3.431086] systemd-fstab-generator[40246]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:21] systemd-fstab-generator[42489]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:25] systemd-fstab-generator[46138]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:27] systemd-fstab-generator[48231]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:49] systemd-fstab-generator[67085]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:50] systemd-fstab-generator[73846]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:51] systemd-fstab-generator[75593]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +54.567049] systemd-fstab-generator[81482]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:52] systemd-fstab-generator[81819]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +3.618974] systemd-fstab-generator[82220]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:54] systemd-fstab-generator[86445]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:57] systemd-fstab-generator[92167]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 11:37] systemd-fstab-generator[4093]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 17:37] systemd-fstab-generator[78547]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 23:37] systemd-fstab-generator[38578]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
==> etcd [a2ad4a56696d] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-07-22 00:47:16.782847 I | etcdmain: etcd Version: 3.4.3
2020-07-22 00:47:16.782918 I | etcdmain: Git SHA: 3cf2f69b5
2020-07-22 00:47:16.782925 I | etcdmain: Go Version: go1.12.12
2020-07-22 00:47:16.782930 I | etcdmain: Go OS/Arch: linux/amd64
2020-07-22 00:47:16.782942 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-07-22 00:47:16.783077 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-07-22 00:47:16.784117 I | embed: name = minikube
2020-07-22 00:47:16.784136 I | embed: data dir = /var/lib/minikube/etcd
2020-07-22 00:47:16.784143 I | embed: member dir = /var/lib/minikube/etcd/member
2020-07-22 00:47:16.784149 I | embed: heartbeat = 100ms
2020-07-22 00:47:16.784155 I | embed: election = 1000ms
2020-07-22 00:47:16.784160 I | embed: snapshot count = 10000
2020-07-22 00:47:16.784179 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-07-22 00:47:16.790699 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
raft2020/07/22 00:47:16 INFO: b8e14bda2255bc24 switched to configuration voters=()
raft2020/07/22 00:47:16 INFO: b8e14bda2255bc24 became follower at term 0
raft2020/07/22 00:47:16 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/07/22 00:47:16 INFO: b8e14bda2255bc24 became follower at term 1
raft2020/07/22 00:47:16 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-07-22 00:47:16.794357 W | auth: simple token is not cryptographically signed
2020-07-22 00:47:16.795917 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-07-22 00:47:16.796230 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/07/22 00:47:16 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-07-22 00:47:16.796895 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
2020-07-22 00:47:16.798369 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-07-22 00:47:16.798609 I | embed: listening for metrics on http://127.0.0.1:2381
2020-07-22 00:47:16.798714 I | embed: listening for peers on 172.17.0.2:2380
raft2020/07/22 00:47:17 INFO: b8e14bda2255bc24 is starting a new election at term 1
raft2020/07/22 00:47:17 INFO: b8e14bda2255bc24 became candidate at term 2
raft2020/07/22 00:47:17 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
raft2020/07/22 00:47:17 INFO: b8e14bda2255bc24 became leader at term 2
raft2020/07/22 00:47:17 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-07-22 00:47:17.692403 I | etcdserver: setting up the initial cluster version to 3.4
2020-07-22 00:47:17.693010 N | etcdserver/membership: set the initial cluster version to 3.4
2020-07-22 00:47:17.693141 I | etcdserver/api: enabled capabilities for version 3.4
2020-07-22 00:47:17.693223 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-07-22 00:47:17.693366 I | embed: ready to serve client requests
2020-07-22 00:47:17.695096 I | embed: serving client requests on 127.0.0.1:2379
2020-07-22 00:47:17.695716 I | embed: ready to serve client requests
2020-07-22 00:47:17.700130 I | embed: serving client requests on 172.17.0.2:2379
==> kernel <==
00:48:53 up 12 days, 16:43, 0 users, load average: 0.14, 0.41, 0.50
Linux minikube 4.1.12-124.39.5.1.el7uek.x86_64 #2 SMP Tue Jun 9 20:03:37 PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"
==> kube-apiserver [a3ff7508f7ba] <==
W0722 00:47:19.451113 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0722 00:47:19.463096 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0722 00:47:19.480464 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0722 00:47:19.484036 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0722 00:47:19.498842 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0722 00:47:19.520167 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0722 00:47:19.520240 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0722 00:47:19.532063 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0722 00:47:19.532087 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0722 00:47:19.533978 1 client.go:361] parsed scheme: "endpoint"
I0722 00:47:19.534019 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0722 00:47:19.543720 1 client.go:361] parsed scheme: "endpoint"
I0722 00:47:19.543772 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0722 00:47:22.105085 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0722 00:47:22.105254 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0722 00:47:22.105478 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0722 00:47:22.106170 1 secure_serving.go:178] Serving securely on [::]:8443
I0722 00:47:22.106239 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0722 00:47:22.106252 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0722 00:47:22.106274 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0722 00:47:22.106828 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0722 00:47:22.106861 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0722 00:47:22.107038 1 available_controller.go:387] Starting AvailableConditionController
I0722 00:47:22.107072 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0722 00:47:22.107101 1 autoregister_controller.go:141] Starting autoregister controller
I0722 00:47:22.107108 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0722 00:47:22.107143 1 crd_finalizer.go:266] Starting CRDFinalizer
I0722 00:47:22.107216 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0722 00:47:22.107259 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0722 00:47:22.107626 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0722 00:47:22.107643 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0722 00:47:22.107662 1 controller.go:86] Starting OpenAPI controller
I0722 00:47:22.107679 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0722 00:47:22.107703 1 naming_controller.go:291] Starting NamingConditionController
I0722 00:47:22.107718 1 establishing_controller.go:76] Starting EstablishingController
I0722 00:47:22.107734 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0722 00:47:22.107764 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0722 00:47:22.107930 1 controller.go:81] Starting OpenAPI AggregationController
E0722 00:47:22.111021 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg:
I0722 00:47:22.206795 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0722 00:47:22.207082 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0722 00:47:22.207655 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0722 00:47:22.207673 1 cache.go:39] Caches are synced for autoregister controller
I0722 00:47:22.208346 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0722 00:47:23.105219 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0722 00:47:23.105281 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0722 00:47:23.112719 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0722 00:47:23.118328 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0722 00:47:23.118352 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0722 00:47:23.533180 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0722 00:47:23.571861 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0722 00:47:23.714575 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0722 00:47:23.715525 1 controller.go:606] quota admission added evaluator for: endpoints
I0722 00:47:23.719353 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0722 00:47:24.919197 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0722 00:47:24.937843 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0722 00:47:25.016482 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0722 00:47:25.147428 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0722 00:47:40.922859 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0722 00:47:40.947334 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
==> kube-controller-manager [a533bff3e4a1] <==
I0722 00:47:39.837958 1 horizontal.go:169] Starting HPA controller
I0722 00:47:39.837968 1 shared_informer.go:223] Waiting for caches to sync for HPA
I0722 00:47:40.088988 1 controllermanager.go:533] Started "cronjob"
I0722 00:47:40.089069 1 cronjob_controller.go:97] Starting CronJob Manager
I0722 00:47:40.238112 1 controllermanager.go:533] Started "csrapproving"
I0722 00:47:40.238172 1 certificate_controller.go:119] Starting certificate controller "csrapproving"
I0722 00:47:40.238202 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving
I0722 00:47:40.388195 1 controllermanager.go:533] Started "csrcleaner"
I0722 00:47:40.388269 1 cleaner.go:82] Starting CSR cleaner controller
I0722 00:47:40.638484 1 controllermanager.go:533] Started "daemonset"
I0722 00:47:40.641533 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0722 00:47:40.642892 1 daemon_controller.go:257] Starting daemon sets controller
I0722 00:47:40.642904 1 shared_informer.go:223] Waiting for caches to sync for daemon sets
I0722 00:47:40.657390 1 shared_informer.go:223] Waiting for caches to sync for resource quota
W0722 00:47:40.675677 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0722 00:47:40.675737 1 shared_informer.go:230] Caches are synced for service account
I0722 00:47:40.686430 1 shared_informer.go:230] Caches are synced for node
I0722 00:47:40.686457 1 range_allocator.go:172] Starting range CIDR allocator
I0722 00:47:40.686464 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
I0722 00:47:40.686472 1 shared_informer.go:230] Caches are synced for cidrallocator
I0722 00:47:40.699297 1 shared_informer.go:230] Caches are synced for PV protection
I0722 00:47:40.704699 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0722 00:47:40.721371 1 shared_informer.go:230] Caches are synced for expand
I0722 00:47:40.738829 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0722 00:47:40.746942 1 shared_informer.go:230] Caches are synced for namespace
I0722 00:47:40.768962 1 shared_informer.go:230] Caches are synced for TTL
I0722 00:47:40.842276 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0722 00:47:40.892581 1 shared_informer.go:230] Caches are synced for attach detach
I0722 00:47:40.919114 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0722 00:47:40.919401 1 shared_informer.go:230] Caches are synced for deployment
I0722 00:47:40.925718 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"43945934-1ceb-4590-a553-b40467760a6a", APIVersion:"apps/v1", ResourceVersion:"183", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
I0722 00:47:40.932962 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0722 00:47:40.938191 1 shared_informer.go:230] Caches are synced for HPA
I0722 00:47:40.938639 1 shared_informer.go:230] Caches are synced for GC
I0722 00:47:40.938829 1 shared_informer.go:230] Caches are synced for stateful set
I0722 00:47:40.939860 1 shared_informer.go:230] Caches are synced for endpoint
I0722 00:47:40.940136 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"03512eb8-eec4-4832-adbf-4fa50787af11", APIVersion:"apps/v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-pxvdv
I0722 00:47:40.943176 1 shared_informer.go:230] Caches are synced for daemon sets
I0722 00:47:40.945835 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"03512eb8-eec4-4832-adbf-4fa50787af11", APIVersion:"apps/v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-xx8pk
I0722 00:47:40.955743 1 shared_informer.go:230] Caches are synced for PVC protection
I0722 00:47:40.957490 1 shared_informer.go:230] Caches are synced for taint
I0722 00:47:40.957608 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
W0722 00:47:40.957701 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0722 00:47:40.957747 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0722 00:47:40.957947 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0722 00:47:40.958074 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"989bec6a-8f83-47f4-bcc1-28e33282fd7f", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0722 00:47:40.964962 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"44cb4030-303f-459c-8e50-509e4dabe85c", APIVersion:"apps/v1", ResourceVersion:"194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-pvzpl
I0722 00:47:40.968310 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"007b5a9c-bd3b-41ae-99d1-1bb8d25c4abe", APIVersion:"apps/v1", ResourceVersion:"217", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-gw9vd
I0722 00:47:40.982993 1 shared_informer.go:230] Caches are synced for job
I0722 00:47:40.989069 1 shared_informer.go:230] Caches are synced for persistent volume
I0722 00:47:41.038479 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0722 00:47:41.090358 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0722 00:47:41.241861 1 shared_informer.go:230] Caches are synced for garbage collector
I0722 00:47:41.257754 1 shared_informer.go:230] Caches are synced for resource quota
I0722 00:47:41.282085 1 shared_informer.go:230] Caches are synced for disruption
I0722 00:47:41.282121 1 disruption.go:339] Sending events to api server.
I0722 00:47:41.291930 1 shared_informer.go:230] Caches are synced for resource quota
I0722 00:47:41.296429 1 shared_informer.go:230] Caches are synced for ReplicationController
I0722 00:47:41.331643 1 shared_informer.go:230] Caches are synced for garbage collector
I0722 00:47:41.331679 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
==> kube-scheduler [9ceffe31db4f] <==
I0722 00:47:16.705924 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0722 00:47:16.706003 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0722 00:47:17.537458 1 serving.go:313] Generated self-signed cert in-memory
W0722 00:47:22.191466 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0722 00:47:22.191500 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0722 00:47:22.191536 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0722 00:47:22.191548 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0722 00:47:22.211455 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0722 00:47:22.211541 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0722 00:47:22.214253 1 authorization.go:47] Authorization is disabled
W0722 00:47:22.214268 1 authentication.go:40] Authentication is disabled
I0722 00:47:22.214306 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0722 00:47:22.216343 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0722 00:47:22.218283 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0722 00:47:22.218304 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0722 00:47:22.218518 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0722 00:47:22.218856 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0722 00:47:22.219827 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0722 00:47:22.220185 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0722 00:47:22.220246 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0722 00:47:22.220340 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0722 00:47:22.220502 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0722 00:47:22.220780 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0722 00:47:22.283460 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0722 00:47:22.283820 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0722 00:47:22.284134 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0722 00:47:22.284308 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0722 00:47:22.284433 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0722 00:47:22.284516 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0722 00:47:22.284595 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0722 00:47:22.286121 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0722 00:47:22.288660 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0722 00:47:22.291462 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0722 00:47:22.293132 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0722 00:47:24.818494 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0722 00:47:25.517009 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0722 00:47:25.534170 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0722 00:47:40.963390 1 factory.go:503] pod: kube-system/coredns-66bff467f8-pxvdv is already present in the active queue
E0722 00:47:40.978216 1 factory.go:503] pod: kube-system/coredns-66bff467f8-xx8pk is already present in the active queue
==> kubelet <==
-- Logs begin at Wed 2020-07-22 00:46:44 UTC, end at Wed 2020-07-22 00:48:54 UTC. --
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.462346 5678 server.go:837] Client rotation is on, will bootstrap in background
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.464837 5678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.532836 5678 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.533939 5678 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.534164 5678 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.534415 5678 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.534453 5678 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.534463 5678 container_manager_linux.go:306] Creating device plugin manager: true
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.534587 5678 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.534637 5678 client.go:92] Start docker client with request timeout=2m0s
Jul 22 00:48:47 minikube kubelet[5678]: W0722 00:48:47.542133 5678 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.542210 5678 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 22 00:48:47 minikube kubelet[5678]: W0722 00:48:47.549217 5678 plugins.go:193] can't set sysctl net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.549310 5678 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.557083 5678 docker_service.go:258] Docker Info: &{ID:6DYC:PHGN:XMRX:RV4O:MGUA:OIPS:K5DQ:A3TE:W4NO:J7TC:5JFV:5GUY Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2020-07-22T00:48:47.550299286Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.1.12-124.39.5.1.el7uek.x86_64 OperatingSystem:Ubuntu 19.10 (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002a6380 NCPU:16 MemTotal:61554528256 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy:http://www-proxy-brmdc.us.*.com:80/ HTTPSProxy:http://www-proxy-brmdc.us.*.com:80/ NoProxy:10.88.105.73,localhost,127.0.0.1,172.17.0.3 Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:19.03.2 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ff48f57fc83a8c44cf4ad5d672424a98ba37ded6 Expected:ff48f57fc83a8c44cf4ad5d672424a98ba37ded6} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled]}
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.557184 5678 docker_service.go:271] Setting cgroupDriver to cgroupfs
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573498 5678 remote_runtime.go:59] parsed scheme: ""
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573526 5678 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573590 5678 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573608 5678 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573673 5678 remote_image.go:50] parsed scheme: ""
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573684 5678 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573697 5678 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573706 5678 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573746 5678 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jul 22 00:48:47 minikube kubelet[5678]: I0722 00:48:47.573789 5678 kubelet.go:317] Watching apiserver
Jul 22 00:48:53 minikube kubelet[5678]: E0722 00:48:53.887536 5678 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Jul 22 00:48:53 minikube kubelet[5678]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.896081 5678 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.2, apiVersion: 1.40.0
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.896602 5678 server.go:1125] Started kubelet
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.896857 5678 server.go:145] Starting to listen on 0.0.0.0:10250
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.898310 5678 server.go:393] Adding debug handlers to kubelet server.
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.899129 5678 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.899215 5678 volume_manager.go:265] Starting Kubelet Volume Manager
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.899364 5678 desired_state_of_world_populator.go:139] Desired state populator starts to run
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.913131 5678 clientconn.go:106] parsed scheme: "unix"
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.913158 5678 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.913297 5678 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.913316 5678 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.922661 5678 status_manager.go:158] Starting to sync pod status with apiserver
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.922915 5678 kubelet.go:1821] Starting kubelet main sync loop.
Jul 22 00:48:53 minikube kubelet[5678]: E0722 00:48:53.923204 5678 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.999487 5678 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Jul 22 00:48:53 minikube kubelet[5678]: I0722 00:48:53.999525 5678 kuberuntime_manager.go:978] updating runtime config through cri with podcidr 10.244.0.0/24
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:53.999779 5678 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:53.999953 5678 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
Jul 22 00:48:54 minikube kubelet[5678]: E0722 00:48:54.023889 5678 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:54.029649 5678 kubelet_node_status.go:70] Attempting to register node minikube
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:54.089371 5678 kubelet_node_status.go:112] Node minikube was previously registered
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:54.089485 5678 kubelet_node_status.go:73] Successfully registered node minikube
Jul 22 00:48:54 minikube kubelet[5678]: E0722 00:48:54.224093 5678 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:54.224712 5678 cpu_manager.go:184] [cpumanager] starting with none policy
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:54.224733 5678 cpu_manager.go:185] [cpumanager] reconciling every 10s
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:54.224753 5678 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:54.224917 5678 state_mem.go:88] [cpumanager] updated default cpuset: ""
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:54.224932 5678 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Jul 22 00:48:54 minikube kubelet[5678]: I0722 00:48:54.224945 5678 policy_none.go:43] [cpumanager] none policy: Start
Jul 22 00:48:54 minikube kubelet[5678]: F0722 00:48:54.225979 5678 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 22 00:48:54 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jul 22 00:48:54 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
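A possible way to confirm the root cause hinted at by the fatal kubelet message above ("failed to find subsystem mount for required subsystem: pids"): the pids cgroup controller was only introduced in Linux 4.3, and the host kernel reported in these logs is UEK 4.1.12, so the controller may simply not exist on this machine. The following is a minimal diagnostic sketch, run on the host (not inside the minikube container), using only standard kernel interfaces; it is not an official minikube check.

# List the cgroup controllers the running kernel knows about.
# If no "pids" line appears, the kernel predates the pids controller
# and kubelet cannot create the Burstable QoS cgroup it needs.
grep pids /proc/cgroups

# Check whether the pids controller is mounted where kubelet expects it.
ls -d /sys/fs/cgroup/pids 2>/dev/null || echo "pids cgroup not mounted"

# Compare with the kernel version shown in the logs above (4.1.12-124.39.5.1.el7uek.x86_64).
uname -r

If the controller is missing, upgrading the host to a kernel that ships the pids cgroup controller (or to a newer UEK release) would likely be required before the kubelet inside the docker driver can become Ready.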