BUG REPORT: kubelet cgroup driver #639
Comments
I can confirm this. @lavender2020 You need to manually append `--cgroup-driver=systemd` to the kubelet service configuration. The default driver that the kubelet falls back to is `cgroupfs`, which does not match what docker reports here.
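For anyone hitting this, a minimal sketch of that fix, assuming the stock kubeadm drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (paths and environment variable names can differ per distro and package):

```bash
# 1. check which cgroup driver docker is actually using
docker info 2>/dev/null | grep -i 'cgroup driver'

# 2. make the kubelet use the same driver by editing the drop-in, e.g.:
#      Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

# 3. reload units and restart the kubelet so the change takes effect
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```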
Most people end up fixing this by hand. @luxas Shall we add a preflight check on the consistency of the cgroup driver between the kubelet and docker? But if so, we may need to acquire root privileges to apply such changes.
I hit this same issue with kubeadm. The kubelet is using --cgroup-driver=systemd.

`docker info | grep -i cgroup`

kubelet logs:

Version Info:
@dkirrane Have you reloaded the service configuration? Run `systemctl daemon-reload && systemctl restart kubelet`.
This issue is still not fixed in 1.9.3. Version Info:
@gades What's your cgroup driver? Check with `docker info | grep -i cgroup`.
Having the same problem.
Is there somewhere else that Kubelet is getting the cgroupfs driver directive?
@mas-dse-greina Please refer to the solution in my comment.
@dixudx Even after appending --cgroup-driver=systemd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the problem still persisted. This is the latest file. PS: It got fixed. After restarting the daemon and the kubelet, I used kubeadm init --pod-network-cidr=10.244.0.0/16.
Yes, I am finding the same thing. Appending --cgroup-driver=systemd doesn't seem to have any effect. I have restarted the service and even rebooted the computer. It seems like the behavior is just on this one machine. I have been successful with 4 other machines, but this one just doesn't seem to want to join the cluster.

-Tony
After you change the unit file you need to run systemctl daemon-reload and restart the kubelet. FWIW this is defaulted in the RPMs but not in the .debs. Is there any distribution currently in mainstream support that doesn't default to systemd now? /assign @detiber
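A low-touch way to do that on the .deb side, sketched under the assumption that the packaged unit's ExecStart passes $KUBELET_EXTRA_ARGS (the drop-in name 20-cgroup-driver.conf is arbitrary):

```bash
# add a drop-in instead of editing the packaged unit file
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/20-cgroup-driver.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"
EOF

# unit file changes only take effect after a reload + restart
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```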
I hit this same issue with kubeadm v1.9.3 and v1.9.4.

Start kubelet with --cgroup-driver=systemd:

```
$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
```

Reload the service:

```
$ systemctl daemon-reload
$ systemctl restart kubelet
```

Check docker info:

```
$ docker info | grep -i cgroup
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Cgroup Driver: systemd
```

kubelet logs:

```
$ kubelet logs
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
```

Version Info:

```
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
$ kubelet --version
Kubernetes v1.9.3
$ docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64
$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
```
@FrostyLeaf Can you look at the command line of the running kubelet to see if the cgroup driver is specified there? Something like `ps aux | grep /bin/kubelet`.
@bart0sh This is it:

```
$ ps aux | grep /bin/kubelet
root 13025 0.0 0.0 112672 980 pts/4 S+ 01:49 0:00 grep --color=auto /bin/kubelet
root 30495 4.5 0.6 546152 76924 ? Ssl 00:14 4:22 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=systemd --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki --fail-swap-on=false
```
@FrostyLeaf Thank you! I could reproduce this as well. It seems to be a bug; looking at it. As a temporary workaround you can switch both docker and the kubelet to the cgroupfs driver. It should work.
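For that workaround, a minimal sketch of aligning both sides on cgroupfs, assuming a docker version that reads exec-opts from /etc/docker/daemon.json:

```bash
# force docker onto the cgroupfs driver
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
sudo systemctl restart docker

# set --cgroup-driver=cgroupfs in the kubelet drop-in as well, then:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```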
@bart0sh Fine. Thanks a lot. I'll try that.
Same here.

Context: Host=CentOS 7.4, Guest=VirtualBox Version 5.2.8 r121009 (Qt5.6.1)

```
[root@kubernetes ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@kubernetes ~]# kubelet --version
Kubernetes v1.9.4
[root@kubernetes ~]# docker version
Client:
Version: 1.13.1
API version: 1.26
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Experimental: false
[root@kubernetes ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:21:35Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```

docker Cgroup is systemd:

```
[root@kubernetes ~]# docker info | grep Cgroup
WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd
```

kubelet.service started with Cgroup=systemd:

```
[root@kubernetes ~]# grep cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
```

systemctl daemon-reload & restart kubelet service:

```
[root@kubernetes ~]# systemctl daemon-reload
[root@kubernetes ~]# systemctl stop kubelet.service
[root@kubernetes ~]# systemctl start kubelet.service
```

kubelet logs:

```
[root@kubernetes ~]# kubelet logs
I0318 02:07:10.006151 29652 feature_gate.go:226] feature gates: &{{} map[]}
I0318 02:07:10.006310 29652 controller.go:114] kubelet config controller: starting controller
I0318 02:07:10.006315 29652 controller.go:118] kubelet config controller: validating combination of defaults and flags
I0318 02:07:10.018880 29652 server.go:182] Version: v1.9.4
I0318 02:07:10.018986 29652 feature_gate.go:226] feature gates: &{{} map[]}
I0318 02:07:10.019118 29652 plugins.go:101] No cloud provider specified.
W0318 02:07:10.019239 29652 server.go:328] standalone mode, no API client
W0318 02:07:10.068650 29652 server.go:236] No api server defined - no events will be sent to API server.
I0318 02:07:10.068670 29652 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0318 02:07:10.069130 29652 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
I0318 02:07:10.069306 29652 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
I0318 02:07:10.069404 29652 container_manager_linux.go:266] Creating device plugin manager: false
W0318 02:07:10.072836 29652 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0318 02:07:10.072860 29652 kubelet.go:576] Hairpin mode set to "hairpin-veth"
I0318 02:07:10.075139 29652 client.go:80] Connecting to docker on unix:///var/run/docker.sock
I0318 02:07:10.075156 29652 client.go:109] Start docker client with request timeout=2m0s
I0318 02:07:10.080336 29652 docker_service.go:232] Docker cri networking managed by kubernetes.io/no-op
I0318 02:07:10.090943 29652 docker_service.go:237] Docker Info: &{ID:DUEI:P7Y3:JKGP:XJDI:UFXG:NAOX:K7ID:KHCF:PCGW:46QA:TQZB:WEXF Containers:18 ContainersRunning:17 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:89 OomKillDisable:true NGoroutines:98 SystemTime:2018-03-18T02:07:10.083543475+01:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-693.21.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc42027b810 NCPU:2 MemTotal:2097364992 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes.master Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]} docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc4202a8f00} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:N/A Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:N/A Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
```

kube processes running:

```
[root@kubernetes ~]# ps aux | grep -i kube
root 10182 0.4 1.2 54512 25544 ? Ssl mars17 1:10 kube-scheduler --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler.conf --address=127.0.0.1
root 10235 1.8 12.7 438004 261948 ? Ssl mars17 4:44 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --allow-privileged=true --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-allowed-names=front-proxy-client --service-account-key-file=/etc/kubernetes/pki/sa.pub --client-ca-file=/etc/kubernetes/pki/ca.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-username-headers=X-Remote-User --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --insecure-port=0 --enable-bootstrap-token-auth=true --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --secure-port=6443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.70 --service-cluster-ip-range=10.96.0.0/12 --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --authorization-mode=Node,RBAC --etcd-servers=http://127.0.0.1:2379
root 10421 0.1 1.0 52464 22052 ? Ssl mars17 0:20 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
root 12199 1.7 8.5 326552 174108 ? Ssl mars17 4:11 kube-controller-manager --address=127.0.0.1 --leader-elect=true --controllers=*,bootstrapsigner,tokencleaner --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --use-service-account-credentials=true --kubeconfig=/etc/kubernetes/controller-manager.conf --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key
root 22928 0.0 1.0 279884 20752 ? Sl 01:10 0:00 /home/weave/weaver --port=6783 --datapath=datapath --name=fe:9b:da:25:e2:b2 --host-root=/host --http-addr=127.0.0.1:6784 --status-addr=0.0.0.0:6782 --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range=10.32.0.0/12 --nickname=kubernetes.master --ipalloc-init consensus=1 --conn-limit=30 --expect-npc 192.168.1.70
root 23308 0.0 0.7 38936 15340 ? Ssl 01:10 0:01 /kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2
65534 23443 0.0 0.8 37120 18028 ? Ssl 01:10 0:03 /sidecar --v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
root 29547 1.6 2.9 819012 61196 ? Ssl 02:07 0:22 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
```
v1.9.5 fixed this issue, awesome! @bart0sh
@FrostyLeaf I'm still able to reproduce it with 1.9.5:

```
$ rpm -qa | grep kube
$ docker info 2>/dev/null | grep -i cgroup
$ ps aux | grep cgroup-driver
I0321 13:50:29.901008 30817 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
```

Are you still using the systemd cgroup driver?
I propose to close this issue. I've observed 2 reasons that cause most of the reports here:
I tested the --cgroup-driver=systemd option with kubelet 1.8.0, 1.9.0, 1.9.3 and 1.9.5. There were no "cgroupfs is different from docker cgroup driver: systemd" error messages in the logs.
@timothysc There are no objections regarding my last comment. Can you close this issue, please? It's not a bug, as it's caused by a lack of knowledge about the kubelet and/or systemd. Two things that might make sense to do from my point of view are:
We may want to consider creating separate issues for those. Anyway, this issue can be closed.
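For the preflight-check idea floated earlier in this thread, a rough bash sketch of what such a consistency check could look like (illustrative only, not actual kubeadm preflight code):

```bash
#!/bin/bash
# compare docker's cgroup driver with the one the running kubelet was started with
docker_driver=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/ {print $2}')
kubelet_driver=$(tr '\0' '\n' < /proc/"$(pgrep -x kubelet)"/cmdline \
                 | sed -n 's/^--cgroup-driver=//p')

echo "docker:  ${docker_driver:-unknown}"
echo "kubelet: ${kubelet_driver:-cgroupfs (default)}"

if [ "$docker_driver" != "${kubelet_driver:-cgroupfs}" ]; then
    echo "WARNING: cgroup driver mismatch, kubelet will fail to start" >&2
    exit 1
fi
```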
Things look fine for me now, thanks to v1.9.5. I agree with @bart0sh about init checking the cgroup driver consistency between the kubelet and docker. Just my 2 cents.
Hi, I'm having the same issue when I run kubeadm init. I did run systemctl daemon-reload and systemctl restart kubelet, but it still shows the same error.
Another weird thing is the output I get when I run it again.
I cannot figure out the problem.
@moqichenle That's strange. It should work. Can you show the output of the following commands?
Here is what I see on my system:
@bart0sh Hi, thank you for the help. After typing kubeadm init, it starts, but then kubeadm init fails because either the kubelet is not healthy or the kubelet is not running.
@moqichenle Did you run the two commands above? Can you run them and show the output?
Yes, I did run the two commands before the init.
@moqichenle Do you see anything suspicious (errors, warnings) in the output?
Ah, I see. Thank you. :) When I ran kubeadm init with different cgroup driver settings, I got the mismatch error. When the cgroup driver settings are the same, I get the "context deadline exceeded" error instead.
@moqichenle It looks like a docker issue to me; I believe it's not related to this one. You can search for "context deadline exceeded" for more info.
@bart0sh Yep, I don't think it's related to this issue anymore. Will do. Thank you very much :D
This PR should help to decrease the confusion caused by running 'kubelet logs', 'kubelet status' and other non-existing kubelet commands: #61833. It makes the kubelet produce an error and exit if it's run with an incorrect command line. Please review.
Hi, I can reproduce this issue on 1.10. Just to check: is this a bug, and will it be fixed in v1.11?
IMO this is a configuration mismatch between docker and the kubelet. Before running kubeadm init, make sure both are using the same cgroup driver.
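A quick way to verify the match before kubeadm init (a sketch assuming flag-based kubelet configuration; the drop-in path may differ per distro):

```bash
# what docker uses
docker info 2>/dev/null | grep -i 'cgroup driver'
# what the kubelet drop-in will pass
grep -o -- '--cgroup-driver=[a-z]*' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```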
@dixudx I'm trying to install k8s following the installation guide at https://kubernetes.io/docs/setup/independent/install-kubeadm/, and the steps are blocked by this issue. Below are the details of my environment.
OS:
Docker:
K8S:
The cgroup drivers of docker and the kubelet:
Both report the systemd cgroup driver, hence there should be no need to adjust the kubelet's cgroup manually. I then started the kubelet, but it failed with the error message mentioned above.
The key info I see in the log is that CgroupDriver is still cgroupfs. I guess that's what causes the cgroup mismatch issue, but I have no idea how to adjust this default value. Can you help clarify? Thanks!
@wshandao Please stop using `kubelet logs`; that is not a real kubelet subcommand. The correct way to see the log is `journalctl -u kubelet`.
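For reference, standard systemd journal commands for this (nothing kubeadm-specific):

```bash
# show the most recent kubelet log entries
journalctl -u kubelet --no-pager -n 50
# follow the log live while reproducing the failure
journalctl -u kubelet -f
```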
Thanks @dixudx, my mistake, and this is not actually an issue holding up my installation.
i second the requests to close this one. this is independent of kubeadm and is more of a kubelet vs docker issue. similar reports:
i have tested this on 3 different bare-bone Ubuntu machines (16.04.2, 16.04.0, 17.04) and it appears that the docker driver there is cgroupfs, unlike the user report in the original post where docker is using systemd. given my tests i don't see a need to add a preflight check. what the kubelet should probably do for a friendly UX is to always match the docker driver automatically.
@neolit123 agreed. However I do think we should open a troubleshooting doc issue, just in case.
I had this same problem on Ubuntu 16.04, Kube version v1.10.4, Docker version 1.13.1.
I modified the config in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, then did a systemctl daemon-reload and a service kubelet restart.
we are improving our troubleshooting docs, but also in 1.11 and later the cgroup driver for docker should be automatically matched by kubeadm.
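The auto-matching idea, sketched in shell form (kubeadm does this internally in Go; /etc/default/kubelet is the deb-packaging convention for extra kubelet flags and may differ on your system):

```bash
# derive the kubelet's cgroup driver from whatever docker reports
driver=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/ {print $2}')
echo "KUBELET_EXTRA_ARGS=--cgroup-driver=${driver}" | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```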
I do think it's a bug. I checked the docker version and the kubeadm file, and of course the kubeadm script does that check too; however, I still get the mismatch error message. If you read the thread carefully, you can see that some people above hit the issue AFTER correctly setting the parameter.
this is still happening, nothing worked!
BUG REPORT
Versions
kubeadm version: 1.9.0-00 amd64
kubelet version: 1.9.0-00 amd64
kubernetes-cni: 0.6.0-00 amd64
docker-ce version: 17.12.0~ce-0~ubuntu amd64
system version: Ubuntu 16.04.3 LTS
Physical machine
Problems
Installing a kubernetes cluster on Ubuntu 16.04. When running kubeadm init, there is an error:

```
[init] This might take a minute or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
```

Then I checked the syslog (/var/log/syslog) and got the following errors:

```
Jan 04 16:20:58 master03 kubelet[10360]: W0104 16:20:58.268285 10360 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 04 16:20:58 master03 kubelet[10360]: W0104 16:20:58.269487 10360 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 04 16:20:58 master03 kubelet[10360]: I0104 16:20:58.269527 10360 docker_service.go:232] Docker cri networking managed by cni
Jan 04 16:20:58 master03 kubelet[10360]: I0104 16:20:58.274386 10360 docker_service.go:237] Docker Info: &{ID:3XXZ:XEDW:ZDQS:A2MI:5AEN:CFEP:44AQ:YDS4:CRME:UBRS:46LI:MXNS Containers:0 ContainersRunning:0 Cont
Jan 04 16:20:58 master03 kubelet[10360]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
```

And I checked the docker cgroup driver:

```
docker info | grep -i cgroup
Cgroup Driver: systemd
```