
"Scope has no PIDs. Refusing" messages when using --cgroup-driver=systemd #71887

Open · kvaps opened this issue Dec 9, 2018 · 15 comments

@kvaps (Contributor) commented Dec 9, 2018

What happened:

I have a strange issue when using the systemd cgroup driver; example messages:

systemd[1]: libcontainer-29519-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
systemd[1]: libcontainer-29519-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
systemd[1]: Created slice libcontainer_29519_systemd_test_default.slice.
kubelet[6547]: W1209 03:04:49.649981    6547 container.go:422] Failed to get RecentStats("/libcontainer_29519_systemd_test_default.slice") while determining the next housekeeping: unable to find data in memory cache
systemd[1]: Removed slice libcontainer_29519_systemd_test_default.slice.
systemd[1]: libcontainer-29554-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
systemd[1]: libcontainer-29554-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
systemd[1]: Created slice libcontainer_29554_systemd_test_default.slice.
systemd[1]: Removed slice libcontainer_29554_systemd_test_default.slice.
systemd[1]: libcontainer-29561-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
systemd[1]: libcontainer-29561-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
systemd[1]: Created slice libcontainer_29561_systemd_test_default.slice.
kubelet[6547]: W1209 03:04:50.591527    6547 container.go:523] Failed to update stats for container "/libcontainer_29561_systemd_test_default.slice": open /sys/fs/cgroup/memory/libcontainer_29561_systemd_test_default.slice/memory.use_hierarchy: no such file or directory, continuing to push stats
systemd[1]: Removed slice libcontainer_29561_systemd_test_default.slice.
systemd[1]: libcontainer-29578-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
systemd[1]: libcontainer-29578-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
systemd[1]: Created slice libcontainer_29578_systemd_test_default.slice.
kubelet[6547]: W1209 03:04:50.627114    6547 container.go:422] Failed to get RecentStats("/libcontainer_29578_systemd_test_default.slice") while determining the next housekeeping: unable to find data in memory cache

What you expected to happen:

No such messages in the logs.

How to reproduce it (as minimally and precisely as possible):

# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni
# cat /etc/docker/daemon.json 
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "iptables": false,
  "ip-forward": false,
  "log-driver": "json-file",
  "bridge": "none",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  },
  "storage-driver": "overlay2"
}
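
As a sanity check that the daemon actually picked up the driver, docker info can report it directly (the format string is assumed here; it prints the CgroupDriver field of the info output):

# docker info --format '{{.CgroupDriver}}'
systemd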

Anything else we need to know?:

When switching from systemd back to cgroupfs, the problem goes away.
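
For reference, switching back just means flipping the driver in both files shown above (a sketch mirroring the reproduction configs; kubelet and dockerd need a restart afterwards so they agree on the driver):

# /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni

# /etc/docker/daemon.json
"exec-opts": ["native.cgroupdriver=cgroupfs"]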

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: baremetal
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 9 (stretch)
  • Kernel (e.g. uname -a):
Linux pve2 4.15.18-7-pve #1 SMP PVE 4.15.18-26 (Thu, 04 Oct 2018 11:03:06 +0200) x86_64 GNU/Linux
  • Install tools: kubeadm
  • Others: Docker version 18.06.1-ce, build e68fc7a

/kind bug

@kvaps (Contributor, Author) commented Dec 9, 2018

/sig node

@k8s-ci-robot added sig/node and removed needs-sig labels Dec 9, 2018

@discostur commented Jan 10, 2019

Seeing the same messages as @kvaps:

Docker Version: 18.09.1
Kernel: 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
k8s: GitVersion:"v1.10.11"

@BSWANG (Contributor) commented Jan 11, 2019

This empty cgroup is created by runc to check systemd capabilities:
https://github.com/opencontainers/runc/blob/76520a4bf07b0558aca3b867277d696b41978b44/libcontainer/cgroups/systemd/apply_systemd.go#L109

IMO, the kubelet/cAdvisor should ignore this kind of cgroup event.
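
For illustration, a minimal Go sketch (not actual kubelet/cAdvisor code) of how such probe cgroups could be recognized and skipped by name; the pattern is derived from the names in the journal output above:

package main

import (
	"fmt"
	"regexp"
)

// Transient cgroup names runc creates while probing systemd capabilities,
// covering both the scope and slice variants seen in the logs.
var runcProbeCgroup = regexp.MustCompile(
	`^/?libcontainer[-_][0-9]+[-_]systemd[-_]test[-_]default([-_]dependencies)?(\.slice|\.scope)?$`)

// isRuncProbeCgroup reports whether a cgroup path looks like one of
// runc's short-lived systemd capability probes.
func isRuncProbeCgroup(path string) bool {
	return runcProbeCgroup.MatchString(path)
}

func main() {
	fmt.Println(isRuncProbeCgroup("/libcontainer_29519_systemd_test_default.slice"))             // true
	fmt.Println(isRuncProbeCgroup("libcontainer-29519-systemd-test-default-dependencies.scope")) // true
	fmt.Println(isRuncProbeCgroup("/kubepods.slice"))                                            // false
}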

@jangras commented Jan 14, 2019

Seeing the same messages:

Docker version: 18.06.0-ce
Kernel: 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64
k8s: v1.13.2

@DuncanFairley commented Jan 25, 2019

I'm getting the same, sometimes multiple times a second.

Docker version: 18.06.1-ce
Kernel: 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Ubuntu 18.04.1
k8s: v1.13.2 - I was seeing it on 1.12.3 as well.

@miry (Contributor) commented Feb 18, 2019

Same issue on 1.14.0-alpha3.

@hchenxa (Member) commented Feb 22, 2019

I also see this issue with Kubernetes 1.12.4 when using systemd as the cgroup driver. While the kubelet is running, these messages flood my syslog; after stopping the kubelet, they only appear when operating on Docker containers, e.g. creating or destroying a container.

@oiooj commented Feb 25, 2019

Same issue:

Docker version: 18.09.2-ce
Linux 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
k8s: v1.12.4
  • Install tools: kubeadm

@gocoolcat commented Mar 20, 2019

Seeing the same messages:

Docker Version: 17.12.1-ce
Kernel: 3.10.0-693.43.1.el7.x86_64 #1 SMP Thu Oct 11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
k8s: v1.10.0

I am wondering what the consequence of this is. Is it just message flooding (harmless), or can it cause real problems?

@dmanbu commented Apr 12, 2019

Seeing the same messages:

k8s: v1.14.1
Docker Version: 18.09.4-ce
Kernel: 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

cgroup-driver=systemd

@jguitard commented Apr 17, 2019

Seeing the same messages:

k8s: v1.12.3
Docker version: 18.06.1-ce, build e68fc7a
Kernel: 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
OS: Debian GNU/Linux 9 (stretch)
Hardware configuration: QEMU/KVM (baremetal)
cgroup driver: systemd

@wiwiwa commented Apr 22, 2019

Same issue

k8s: v1.14.1
Docker Version: 18.09.4-ce
Linux k8s-2 5.0.5-arch1-1-ARCH #1 SMP PREEMPT Wed Mar 27 17:53:10 UTC 2019 x86_64 GNU/Linux

cgroup-driver: systemd

@zhtaoit commented Apr 28, 2019

Same issue
K8s: v1.14.0
Docker: 18.06.3-ce
Linux: 4.4.178-1.el7.elrepo.x86_64
cgroup-driver: systemd

@Tyrion85 commented May 6, 2019

Same issue

k8s: 1.14.1
Docker: 18.09.5
Linux: CentOS Linux release 7.6.1810 (Core)
Linux version 3.10.0-957.10.1.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Mon Mar 18 15:06:45 UTC 2019
cgroup-driver: systemd

@hchenxa (Member) commented May 15, 2019

Not sure whether these messages have any functional impact; I just suppress them by configuring rsyslog:

[root@zen-ec-342-master-1 rsyslog.d]# pwd
/etc/rsyslog.d
 
[root@zen-ec-342-master-1 rsyslog.d]# cat ignore-systemd-session-slice.conf
if ($programname == "systemd") and ($msg contains "_systemd_test_default.slice" or $msg contains "systemd-test-default-dependencies.scope") then {
  stop
}
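
After dropping that file into /etc/rsyslog.d/, restart rsyslog so the filter takes effect:

[root@zen-ec-342-master-1 rsyslog.d]# systemctl restart rsyslog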