Appease kubelet warnings on docker for mac #705

Merged
merged 1 commit into from Aug 16, 2019

Conversation

@yamt (Contributor) commented Aug 5, 2019

In my environment, the name=systemd entry in /proc/self/cgroup looks like:

13:name=systemd:/docker/917b388b40c70b17a3283d852d38bfcdc84d1bf8242e32a779eacd98a610e499

Kubelet periodically complains with errors like:

E0802 06:42:52.667123       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/917b388b40c70b17a3283d852d38bfcdc84d1bf8242e32a779eacd98a610e499/kube-proxy": failed to get cgroup stats for "/docker/917b388b40c70b17a3283d852d38bfcdc84d1bf8242e32a779eacd98a610e499/kube-proxy": failed to get container info for "/docker/917b388b40c70b17a3283d852d38bfcdc84d1bf8242e32a779eacd98a610e499/kube-proxy": unknown container "/docker/917b388b40c70b17a3283d852d38bfcdc84d1bf8242e32a779eacd98a610e499/kube-proxy"

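For context, the value that the diff below branches on is the path portion of that name=systemd line. A minimal, self-contained sketch of reading it (systemdCgroupPath is a made-up helper written for illustration, not the actual k3s code that this PR patches):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// systemdCgroupPath returns the path of the name=systemd entry in
// /proc/self/cgroup. Inside Docker for Mac that entry looks like
// "13:name=systemd:/docker/917b388b40c7...", so the returned path is
// "/docker/917b388b40c7...".
func systemdCgroupPath() (string, error) {
	f, err := os.Open("/proc/self/cgroup")
	if err != nil {
		return "", err
	}
	defer f.Close()

	scan := bufio.NewScanner(f)
	for scan.Scan() {
		// Each line has the form "<hierarchy-id>:<controllers>:<path>".
		parts := strings.SplitN(scan.Text(), ":", 3)
		if len(parts) != 3 {
			continue
		}
		for _, controller := range strings.Split(parts[1], ",") {
			if controller == "name=systemd" {
				return parts[2], nil
			}
		}
	}
	return "", scan.Err()
}

func main() {
	path, err := systemdCgroupPath()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(path)
}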
@@ -176,6 +176,8 @@ func checkCgroups() (root string, hasCFS bool, hasPIDs bool) {
 	i := strings.LastIndex(last, ".slice")
 	if i > 0 {
 		root = "/systemd" + last[:i+len(".slice")]
+	} else {
+		root = "/systemd"
Contributor
Seems odd that /systemd would fix this. Does the below also address the problem?

Suggested change
-	root = "/systemd"
+	root = "/"

@yamt (Contributor, Author)

I tried it; it yielded messages like the following.

node_1 | W0806 02:51:34.700760 1 container_manager_linux.go:608] [ContainerManager] Failed to ensure state of "/": failed to move PID 1 (in "/docker/2fe79e61aa25ce952427a8261a795af5ecab64fe4dda22af14eb6416f784648e/kube-proxy") to "/": cpuset: cgroup parent path outside cgroup root

The cgroup tree in the container looks like the following.

spacetanuki% docker-compose exec node find /sys/fs/cgroup -name kube*
/sys/fs/cgroup/systemd/kube-proxy
/sys/fs/cgroup/systemd/kubepods
/sys/fs/cgroup/pids/kube-proxy
/sys/fs/cgroup/pids/kubepods
/sys/fs/cgroup/hugetlb/kube-proxy
/sys/fs/cgroup/hugetlb/kubepods
/sys/fs/cgroup/net_prio/kube-proxy
/sys/fs/cgroup/net_prio/kubepods
/sys/fs/cgroup/perf_event/kube-proxy
/sys/fs/cgroup/perf_event/kubepods
/sys/fs/cgroup/net_cls/kube-proxy
/sys/fs/cgroup/net_cls/kubepods
/sys/fs/cgroup/freezer/kube-proxy
/sys/fs/cgroup/freezer/kubepods
/sys/fs/cgroup/devices/kube-proxy
/sys/fs/cgroup/devices/kubepods
/sys/fs/cgroup/memory/kube-proxy
/sys/fs/cgroup/memory/kubepods
/sys/fs/cgroup/blkio/kube-proxy
/sys/fs/cgroup/blkio/kubepods
/sys/fs/cgroup/cpuacct/kube-proxy
/sys/fs/cgroup/cpuacct/kubepods
/sys/fs/cgroup/cpu/kube-proxy
/sys/fs/cgroup/cpu/kubepods
/sys/fs/cgroup/cpuset/kube-proxy
/sys/fs/cgroup/cpuset/kubepods
spacetanuki%
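To make the patched branch concrete, here is a standalone copy of the logic from the diff above (cgroupRoot is a name made up for this illustration, and the second input path is a hypothetical systemd-host example, not taken from this environment):

package main

import (
	"fmt"
	"strings"
)

// cgroupRoot mirrors the branch shown in the diff: if the name=systemd
// path contains a systemd slice, keep everything up to and including the
// last ".slice"; otherwise fall back to plain "/systemd".
func cgroupRoot(last string) string {
	i := strings.LastIndex(last, ".slice")
	if i > 0 {
		return "/systemd" + last[:i+len(".slice")]
	}
	return "/systemd"
}

func main() {
	// Docker for Mac: the path has no ".slice", so the new else branch
	// applies, matching the /sys/fs/cgroup/systemd tree listed above.
	fmt.Println(cgroupRoot("/docker/917b388b40c70b17a3283d852d38bfcdc84d1bf8242e32a779eacd98a610e499"))
	// Output: /systemd

	// Hypothetical systemd host: the path is trimmed at the last ".slice".
	fmt.Println(cgroupRoot("/system.slice/docker-abc.scope"))
	// Output: /systemd/system.slice
}

With the "/systemd" fallback the root stays inside the mounted cgroup hierarchy shown by the find output above, which is consistent with the "cpuset: cgroup parent path outside cgroup root" error produced when "/" was tried instead.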
