Some containers show zero CPU and RAM usage #1284
Comments
I too have been running into this issue on some of our instances recently, starting about a month ago when a new version of Docker was released to AWS. It seems to begin an hour or two after the tasks are up and running; the tasks then slowly degrade over time and report 0 CPU%. This is a problem for us because we rely heavily on CPU metrics to scale up and down. When tasks falsely report metrics, it artificially inflates the number of tasks needed to compensate.
We have noticed the following logs that may offer clues. When we see this happen, that's when the instance stops reporting CPU stats to the AWS ECS agent.
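A quick way to spot affected containers is to pipe the machine-readable output of `docker stats` through a small filter. This is only a sketch: the `flag_zero_cpu` helper is hypothetical, and it assumes lines of the form `NAME CPU%`, such as `docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}'` produces.

```shell
#!/bin/sh
# Hypothetical helper: flag containers whose CPU reads exactly 0.00%,
# which on a loaded host points at broken stats rather than idleness.
# Expected input lines: "<name> <cpu%>", e.g. from
#   docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}'
flag_zero_cpu() {
  awk '$2 == "0.00%" { print $1 " reports zero CPU" }'
}

# Demo with canned input (no Docker needed):
printf 'web 12.3%%\ngameserver 0.00%%\n' | flag_zero_cpu
# prints: gameserver reports zero CPU
```

On a real host you would feed it live `docker stats` output instead of the canned lines.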
Any updates on this?
I believe this is likely due to docker or containerd being compiled with the wrong protos. /cc @tianon, maybe for the Ubuntu distro packages?
I don't think either package in Ubuntu explicitly generates the protos. 😕
Are you by chance running any game servers with the Pterodactyl panel?
@FrankSealover We are running Pterodactyl and are experiencing this particular issue as well. Did you ever manage to resolve the issue on your end?
I'm getting this as well on Ubuntu 18.04.6.
Per the investigation here, it seems to be related to a particular setting; I'm not sure if there's some other setting that triggers it.
Found the issue: moby is vendoring an outdated containerd/cgroups.
Fixes docker/for-linux#1284, containerd/containerd#6700, moby/moby#43387. Update to cgroups v1.0.1, which has the current proto for cgroup v1. Need to update the cilium/ebpf dependency to v0.4.0. Signed-off-by: Wim <wim@42.be>
Expected behavior

When running `docker stats`, I see CPU/RAM/disk usage for all my containers.

Actual behavior

Some containers show CPU and RAM as 0% / 0 kB, even though they are clearly using RAM: they are game servers, and `htop` shows them using about 1 GB. Only some containers have this issue.

Steps to reproduce the behavior

Unfortunately, I have no clue. One day it was reporting everything; then I came back after some weeks and it was like this.

However, I can say that if I look in Portainer (this container was not created with Portainer), it reports usage for the working containers but shows `cannot read property 'find' of null` for the zero ones.

Output of `docker version`:

Output of `docker info`:

Additional environment details (AWS, VirtualBox, physical, etc.)

I'm running Ubuntu 20.04 on a KVM VPS (x64).

I noticed that a similar issue was reported on moby; however, the suggested solution (adding `GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory swapaccount=1"` to `/etc/default/grub`, updating GRUB, and rebooting) doesn't work.
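For context on why a broken stats proto shows up as a flat 0%: under cgroup v1, `docker stats` derives CPU% from two successive counter samples — the container's CPU time delta over the system CPU time delta, scaled by the number of online CPUs. A minimal sketch of that arithmetic (the `cpu_percent` helper and the sample numbers are illustrative, not Docker's actual code):

```shell
#!/bin/sh
# Sketch of the cgroup v1 CPU% formula used by docker stats:
#   cpu% = (container_delta_ns / system_delta_ns) * num_cpus * 100
# cpu_percent is a hypothetical helper; its arguments are two container
# cpuacct.usage samples (ns), two system CPU time samples (ns), and the
# number of online CPUs.
cpu_percent() {
  awk -v c1="$1" -v c2="$2" -v s1="$3" -v s2="$4" -v n="$5" \
    'BEGIN { printf "%.2f\n", (c2 - c1) / (s2 - s1) * n * 100 }'
}

# A container that consumed 0.5 s of CPU while the system advanced 1 s, on 2 CPUs:
cpu_percent 0 500000000 0 1000000000 2
# prints: 100.00
```

If the vendored cgroups library parses the stats proto incorrectly, the container-side deltas read as zero and this formula yields 0.00% regardless of real load — consistent with what this thread describes.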