
0% CPU usage on docker container #207

Closed
ariaieboy opened this issue Dec 11, 2023 · 11 comments · Fixed by #236
Comments

@ariaieboy

ariaieboy commented Dec 11, 2023

Pulse Version

1.0.0-beta5

Laravel Version

10.x (latest)

PHP Version

8.2 (latest)

Livewire Version

3

Database Driver & Version

Postgresql 16 on docker container

Description

In a dockerized environment Laravel Pulse always reports 0% CPU usage.

Steps To Reproduce

Create a Laravel project using Laravel Sail and check the CPU usage reported by Pulse.

@ariaieboy
Author

OK, I checked the source and found that Pulse uses the `top` command to read the CPU usage.
The `top` command in my containers reports 0% CPU usage.
I use Alpine Linux as the base image.

@jbrooksuk
Member

It looks like this may be a limitation within `top` that prevents it from properly reading the CPU usage of virtualized resources. moby/moby#26113

@ariaieboy
Author

ariaieboy commented Dec 11, 2023

> It looks like this may be a limitation within `top` that prevents it from properly reading the CPU usage of virtualized resources. moby/moby#26113

I didn't test this on other base images like Ubuntu, but using `mpstat` I can get the CPU usage in my container.

The problem with `mpstat` is that by default it shows the average CPU usage since the last reboot, and I don't know how we can use it to calculate the current CPU usage. Maybe `mpstat 1 2` would help, but I don't know whether it's a good replacement for this use case.
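For illustration, here is a minimal sketch of how the `mpstat 1 2` idea could work. It assumes sysstat's `mpstat`, which, when given an interval and count, prints per-interval samples plus an "Average:" line whose last column is %idle, so busy = 100 − idle. The function name and the column layout are assumptions for this sketch, not Pulse's actual approach; column order can vary across sysstat versions.

```python
def cpu_usage_from_mpstat(output: str) -> float:
    """Derive a current CPU-usage percentage from `mpstat 1 2` output.

    Looks for the "Average:" line covering all CPUs and assumes the
    last column is %idle (true for common sysstat builds).
    """
    for line in output.splitlines():
        if line.startswith("Average:") and " all " in f" {line} ":
            idle = float(line.split()[-1])
            return round(100.0 - idle, 1)
    raise ValueError("no Average line found in mpstat output")

# Example "Average:" line in the shape common sysstat builds emit:
sample = ("Average:     all    9.50    0.00    2.00    0.25"
          "    0.00    0.25    0.00    0.00    0.00   88.00")
print(cpu_usage_from_mpstat(sample))  # 12.0
```

Because the two samples are one second apart, the "Average:" line reflects recent load rather than the since-boot average that a bare `mpstat` reports.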

@jessarcher
Member

FWIW top seems to work fine in my Ubuntu container (at least on a Linux host). What host operating system are you using?

@ariaieboy
Author

`/etc/os-release` on the host machine:

PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

`/etc/os-release` in the container:

NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.18.5
PRETTY_NAME="Alpine Linux v3.18"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"

@ariaieboy
Author

@jessarcher I don't know how, but the 0% CPU usage in `top` is fixed now. Here is the output of my `top` command:

/var/www/html # top -bn1
Mem: 9138956K used, 1041152K free, 808552K shrd, 419480K buff, 4727092K cached
CPU:  12% usr   2% sys   0% nic  82% idle   0% io   0% irq   0% sirq
Load average: 0.56 0.50 0.47 1/813 116610
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
116593   44 root     S     841m   8%   1   5% {php} swoole_http_server: worker process for x
116560   44 root     S     842m   8%   0   0% {php} swoole_http_server: worker process for x
115219   44 root     S     841m   8%   0   0% {php} swoole_http_server: task worker process for x
115941   44 root     S     841m   8%   1   0% {php} swoole_http_server: task worker process for x
115684   44 root     S     841m   8%   0   0% {php} swoole_http_server: task worker process for x
115768   44 root     S     841m   8%   0   0% {php} swoole_http_server: task worker process for x
   42    27 root     S     794m   8%   1   0% {php} swoole_http_server: master process for x
   44    42 root     S     788m   8%   0   0% {php} swoole_http_server: manager process for x
   66    53 root     S     781m   8%   0   0% /usr/local/bin/php artisan horizon:work redis --name=default --supervisor=334f0c990a46-qlZY:supervisor-1 --backoff=0 --max-time=0 --max-jobs=0 --memory=128 --queue=media --sleep=3 --timeout=120 --tries=3 --rest=0 --force
116415   53 root     S     779m   8%   2   0% /usr/local/bin/php artisan horizon:work redis --name=default --supervisor=334f0c990a46-qlZY:supervisor-1 --backoff=0 --max-time=0 --max-jobs=0 --memory=128 --queue=default --sleep=3 --timeout=120 --tries=3 --rest=0 --force
   28    25 root     S     779m   8%   1   0% /usr/bin/php /var/www/html/artisan pulse:check
115435   53 root     S     779m   8%   2   0% /usr/local/bin/php artisan horizon:work redis --name=default --supervisor=334f0c990a46-qlZY:supervisor-1 --backoff=0 --max-time=0 --max-jobs=0 --memory=128 --queue=sms --sleep=3 --timeout=120 --tries=3 --rest=0 --force
   53    26 root     S     779m   8%   2   0% /usr/local/bin/php artisan horizon:supervisor 334f0c990a46-qlZY:supervisor-1 redis --workers-name=default --balance=auto --max-processes=4 --min-processes=1 --nice=0 --balance-cooldown=3 --balance-max-shift=1 --parent-id=26 --auto-scaling-strategy=time --backoff=0 --max-time=0 --max-jobs=0 --memory=128 --queue=sms,default,media --sleep=3 --timeout=120 --tries=3 --rest=0 --force
   26    25 root     S     779m   8%   3   0% /usr/bin/php /var/www/html/artisan horizon
   27    25 root     S     779m   8%   1   0% /usr/bin/php -d variables_order=EGPCS /var/www/html/artisan octane:start --server=swoole --host=0.0.0.0 --port=8000 --max-requests=250 --workers=2
   25     1 root     S    29448   0%   1   0% {supervisord} /usr/bin/python3 /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
104359    0 root     S     1704   0%   1   0% sh
115562    0 root     S     1704   0%   1   0% sh
112068    0 root     S     1704   0%   2   0% sh
111448    0 root     S     1704   0%   3   0% sh
116437    0 root     S     1676   0%   3   0% sh
   29    25 root     S     1608   0%   0   0% /bin/sh -c while [ true ]; do (php /var/www/html/artisan schedule:run --verbose --no-interaction &); sleep 60; done
    1     0 root     S     1604   0%   3   0% {start-container} /bin/sh /usr/local/bin/start-container
116610 116437 root   R     1600   0%   2   0% top -bn1
116500   29 root     S     1588   0%   2   0% sleep 60

Still, Laravel Pulse reports 0% CPU usage.

@jessarcher
Member

Interesting - that output is formatted quite differently from what we're expecting. Most notably, the CPU line begins with "CPU:" but Pulse is expecting it to begin with "%Cpu(s):".

Here's the output from top on my Linux laptop:

➜  ~ top -bn1
top - 08:48:45 up 3 days, 15:20,  2 users,  load average: 1.69, 1.36, 1.22
Tasks: 579 total,   1 running, 573 sleeping,   0 stopped,   5 zombie
%Cpu(s):  9.1 us,  4.5 sy,  0.0 ni, 86.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  31737.7 total,   1337.3 free,  17214.1 used,  15534.1 buff/cache
MiB Swap:   8192.0 total,   7779.7 free,    412.2 used.  14523.6 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND

What version of top do you have?

On my host machine I have:

➜  ~ top -V
top from procps-ng 4.0.3

And in my Sail (Ubuntu) container I have:

sail@57830d912d37:/var/www/html$ top -v
  procps-ng 3.3.17

Interestingly, they use different CLI options for printing the version, but both follow the same format for the CPU line.

I'm guessing your version of top probably isn't the procps-ng version but rather an entirely different implementation that Alpine has chosen. The output looks more similar to the implementation Darwin ships with, although interestingly yours still supports the -bn1 flags.
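To make the difference concrete, here is a sketch of parsing both CPU-line formats seen in this thread. This is an illustrative helper written for this discussion, not Pulse's actual implementation; the function name and rounding are assumptions. In both formats, usage = 100 − idle.

```python
import re

def cpu_usage_from_top(output: str) -> float:
    """Estimate total CPU usage from `top -bn1` output.

    Handles the two formats seen in this thread:
      procps-ng: "%Cpu(s):  9.1 us,  4.5 sy,  0.0 ni, 86.4 id, ..."
      BusyBox:   "CPU:  12% usr   2% sys   0% nic  82% idle ..."
    """
    for line in output.splitlines():
        if line.startswith("%Cpu(s):"):
            # procps-ng: idle value precedes the "id" label
            idle = float(re.search(r"([\d.]+)\s*id", line).group(1))
            return round(100.0 - idle, 1)
        if line.startswith("CPU:"):
            # BusyBox: idle value is written as "NN% idle"
            idle = float(re.search(r"(\d+)%\s*idle", line).group(1))
            return round(100.0 - idle, 1)
    raise ValueError("no recognized CPU line found in top output")

procps = "%Cpu(s):  9.1 us,  4.5 sy,  0.0 ni, 86.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st"
busybox = "CPU:  12% usr   2% sys   0% nic  82% idle   0% io   0% irq   0% sirq"
print(cpu_usage_from_top(procps))   # 13.6
print(cpu_usage_from_top(busybox))  # 18.0
```

Matching on the line prefix (`%Cpu(s):` vs `CPU:`) is what distinguishes the two implementations; a parser that only looks for `%Cpu(s):` finds nothing in BusyBox output and falls back to 0%.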

@ariaieboy
Author

@jessarcher Yes, Alpine uses BusyBox 1.36.1 for the `top` command.

@jessarcher
Member

Hi @ariaieboy, I've created #236, which updates the parsing of the top output to hopefully work for both versions of top on Linux.

@ariaieboy
Author

@jessarcher Can we have this in a tagged release so I can test it on my instances?

@timacdonald
Member

v1.0.0-beta8 has just been tagged.


5 participants