
Failed to set kube-reserved with Cgroup Driver: systemd on SLES 15.1 #89478

Closed
xiangyu123 opened this issue Mar 25, 2020 · 3 comments

Comments

@xiangyu123

@xiangyu123 xiangyu123 commented Mar 25, 2020

OS: SLES 15.1
Cgroup Driver: systemd
kubelet version: v1.17.3
Docker version: 19.03.7
Mem: 16Gi
vCPU: 8 cores
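Since the kubelet is configured with cgroupDriver: systemd, it is worth confirming that Docker reports the same driver; a mismatch between the two also prevents the kubelet from starting. A quick check (assumes the docker CLI and the config path used below):

```shell
# Print the cgroup driver Docker is using (expected: systemd)
docker info --format '{{.CgroupDriver}}'
# Print the driver the kubelet is configured with
grep cgroupDriver /etc/kubernetes/kubelet-config.yaml
```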


Error:

Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: I0325 14:59:53.904466   98783 node_container_manager_linux.go:114] Enforcing kube reserved on cgroup "/system.slice/kubelet.service" with limits: map[]
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: F0325 14:59:53.904525   98783 kubelet.go:1380] Failed to start ContainerManager Failed to enforce Kube Reserved Cgroup Limits on "/system.slice/kubelet.servi>
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: goroutine 273 [running]:
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: k8s.io/kubernetes/vendor/k8s.io/klog.stacks(0xc00060b700, 0xc0008f4480, 0xd4, 0x220)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:875 +0>
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: k8s.io/kubernetes/vendor/k8s.io/klog.(*loggingT).output(0x6e7e320, 0xc000000003, 0xc0006373b0, 0x6ce9e25, 0xa, 0x564, 0x0)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:826 +0>
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: k8s.io/kubernetes/vendor/k8s.io/klog.(*loggingT).printf(0x6e7e320, 0xc000000003, 0x42e1e7f, 0x23, 0xc000dbbc58, 0x1, 0x1)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:707 +0>
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: k8s.io/kubernetes/vendor/k8s.io/klog.Fatalf(...)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:1276
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: k8s.io/kubernetes/pkg/kubelet.(*Kubelet).initializeRuntimeDependentModules(0xc0005ed500)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1380 +0x324
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: sync.(*Once).doSlow(0xc0005edca8, 0xc000fd3de8)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /usr/local/go/src/sync/once.go:66 +0xe3
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: sync.(*Once).Do(...)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /usr/local/go/src/sync/once.go:57
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: k8s.io/kubernetes/pkg/kubelet.(*Kubelet).updateRuntimeUp(0xc0005ed500)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2199 +0x4e8
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000ded360)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/ut>
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000ded360, 0x12a05f200, 0x0, 0x1, 0xc0000b80c0)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/ut>
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000ded360, 0x12a05f200, 0xc0000b80c0)
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/ut>
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: created by k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run
Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]:         /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1427 +0x162
Mar 25 14:59:53 IOTPMCNINFVMSA902 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 25 14:59:53 IOTPMCNINFVMSA902 systemd[1]: kubelet.service: Unit entered failed state.
Mar 25 14:59:53 IOTPMCNINFVMSA902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
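The trailing `>` on several journal lines above means journalctl truncated them at the terminal width, which is why the fatal message is cut off. The full line can be recovered, e.g. (assumes systemd/journald on the node):

```shell
# -l (--full) disables line truncation; --no-pager writes straight to stdout.
journalctl -u kubelet.service --no-pager -l | grep 'Failed to enforce'
# Or dump the whole unit log to a file for inspection:
journalctl -u kubelet.service --no-pager -o cat > /tmp/kubelet-journal.txt
```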

kubelet.service:

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=/usr/bin/mkdir -p /sys/fs/cgroup/pids/system.slice/kubelet.service
ExecStartPre=/usr/bin/mkdir -p /sys/fs/cgroup/cpu/system.slice/kubelet.service
ExecStartPre=/usr/bin/mkdir -p /sys/fs/cgroup/cpuacct/system.slice/kubelet.service
ExecStartPre=/usr/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
ExecStartPre=/usr/bin/mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service
ExecStartPre=/usr/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
ExecStartPre=/usr/bin/mkdir -p /sys/fs/cgroup/systemd/system.slice/kubelet.service


ExecStart=/usr/local/bin/kubelet \
  --config=/etc/kubernetes/kubelet-config.yaml \
  --pod-infra-container-image=xxxx/pause-amd64:3.1 \
  --alsologtostderr=true \
  --logtostderr=false \
  --image-pull-progress-deadline=5m \
  --kubeconfig=/var/lib/kubernetes/kubelet-config \
  --register-node=true \
  --log-file-max-size=300 \
  --log-file=/tmp/kubelet.log \
  --log-flush-frequency=5s \
  --log-dir=/tmp \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
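For reference, the ExecStartPre lines above pre-create the kubelet.service cgroup under each cgroup v1 controller, since the directory named by kubeReservedCgroup must already exist when the kubelet starts enforcing limits. The same thing as a loop (a sketch; the controller list varies by distro and kernel config):

```shell
# Pre-create /system.slice/kubelet.service under each cgroup v1 controller
# so the kubelet can enforce kube-reserved limits there.
for ctrl in pids cpu cpuacct cpuset memory hugetlb systemd; do
  mkdir -p "/sys/fs/cgroup/${ctrl}/system.slice/kubelet.service"
done
```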

/etc/kubernetes/kubelet-config.yaml

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: "/etc/kubernetes/manifests"
syncFrequency: 20s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 0.0.0.0
port: 10250
tlsCertFile: "/etc/pki/kubelet.crt"
tlsPrivateKeyFile: "/etc/pki/kubelet.key"
authentication:
  x509:
    clientCAFile: "/etc/pki/trust/anchors/CA.crt"
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: false
authorization:
  mode: AlwaysAllow
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
healthzPort: 10248
healthzBindAddress: 127.0.0.1
clusterDomain: cluster.local
clusterDNS:
  - 10.24.0.2
nodeStatusUpdateFrequency: 20s
HairpinMode: "hairpin-veth"
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
runtimeRequestTimeout: 2m0s
maxPods: 110
SerializeImagePulls: false
KubeReserved: "cpu=1,memory=2Gi"
kubeReservedCgroup: "/system.slice/kubelet.service"
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
cpuCFSQuota: true
failSwapOn: true
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
serializeImagePulls: true
evictionHard:
  imagefs.available: 15%
  memory.available: 500Mi
  nodefs.available: 15%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
enforceNodeAllocatable:
  - pods
  - kube-reserved
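One detail worth noting in the config above: KubeletConfiguration v1beta1 keys are case-sensitive camelCase, so `KubeReserved`, `SerializeImagePulls`, and `HairpinMode` are silently ignored, and `kubeReserved` must be a map, not a string. An ignored `KubeReserved` would explain the empty `limits: map[]` in the log. A hedged sketch of the corrected keys:

```yaml
# Sketch of the corrected, correctly-cased keys (kubeReserved is a map):
kubeReserved:
  cpu: "1"
  memory: "2Gi"
hairpinMode: "hairpin-veth"
serializeImagePulls: false   # appears twice above with conflicting values
```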

Can anyone help?

@neolit123

Member

@neolit123 neolit123 commented Mar 26, 2020

this report reveals a panic.

/remove-triage support
/kind bug
/sig node

@tedyu

Contributor

@tedyu tedyu commented Mar 26, 2020

Mar 25 14:59:53 IOTPMCNINFVMSA902 kubelet[98783]: F0325 14:59:53.904525   98783 kubelet.go:1380] Failed to start ContainerManager Failed to enforce Kube Reserved Cgroup Limits on "/system.slice/kubelet.servi>

The error message appears to be truncated.
Here is the code which logs the error:

			message := fmt.Sprintf("Failed to enforce System Reserved Cgroup Limits on %q: %v", nc.SystemReservedCgroupName, err)

Is it possible to retrieve the rest of the error message?

thanks

@xiangyu123

Author

@xiangyu123 xiangyu123 commented Mar 26, 2020

@tedyu Following your tip, I was able to solve it. Many thanks!

@xiangyu123 xiangyu123 closed this Mar 31, 2020
@xiangyu123 xiangyu123 reopened this Mar 31, 2020
@xiangyu123 xiangyu123 closed this Mar 31, 2020