Very high CPU with v0.16.0 #1037
I recently discovered an infinite loop in PlantUML [1] which is already fixed in the latest snapshot, but in your case it seems that Chrome is taking a lot of CPU. If it's indeed related to a particular container, you can restart it as a workaround.
Not sure. We have a lot of traffic, but it's almost completely ...
We've just increased the machine size to avoid issues over the holidays. But it looks like maybe it was just a temporary thing.
That's odd... PlantUML and ditaa are not using Chrome. Anyway, if you find the root cause or investigate further, please share the results (even in raw form).
I'm seeing the same thing (very high CPU) immediately after starting the latest containers.
Might be an issue with the latest version of Chrome, not sure... Does restarting the containers solve the issue?
When I restarted the services again on January 3rd, the issue was gone. No idea what happened here.
Since @max-wittig had the same issue, it would be interesting to compare your runtime environments; it might help us identify the root cause. Anyway, I'm glad this is a one-time issue, but at the same time I'm a bit confused 🤔
@Mogztter I'm on the same team as @max-wittig. Unfortunately, even after increasing the node size, we've just recently run out of credits on the instance, so it's not a one-time thing; the latest version increased the CPU requirements significantly.
@dlouzan Could you please provide information about your runtime environment? Version 0.16.0 has been running fine on https://kroki.io for more than 2 weeks now and we haven't seen any high CPU usage.

Host

Versions

docker stats

[Charts: CPU 95th percentile over the last 2 weeks (green: Docker CPU total, percent) and system load 95th percentile over the last 2 weeks (pink: system load 1)]

As you can see, everything is looking good, and we generated approximately 650K diagrams over the last 2 weeks.
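For anyone who wants to compare numbers, a one-shot snapshot of per-container CPU and memory can be taken like this (a minimal sketch; the format fields are standard docker stats Go templates, and container names will depend on your compose project):

# One-shot (non-streaming) CPU/memory snapshot of all running containers
$ docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"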
Any update? Are you still experiencing issues with 0.16.0? Did you revert back to 0.15.1?
Yeah, we reverted back to 0.15.1.
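For reference, a minimal sketch of such a revert, assuming the yuzutech/kroki images from the example docker-compose.yml (image names and tags are an assumption; adjust to your setup):

# In docker-compose.yml, pin the tags back, e.g. image: yuzutech/kroki:0.15.1
$ docker-compose pull   # fetch the pinned 0.15.1 images
$ docker-compose up -d  # recreate the containers on the old version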
Same here with 0.16.0. Let me describe the problem and the "workaround" I found. I'm using the documented start-up procedure with the example docker-compose.yml.

After starting, the excalidraw and bpmn containers are not doing well (same high CPU). Now if I stop these and simply restart them: problem gone! In all of the above runs, only bpmn and excalidraw had the issue, but that's not always the case; the mermaid service can be affected as well. Same procedure, and a restart fixes it. It gets weirder: whether I do ... Sorry for the long post, I thought that the full output might let you see things I didn't.

I also verified that reverting to 0.15.1 also "fixes" the issue. It does.
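To make the workaround above concrete, a minimal sketch assuming the service names (bpmn, excalidraw, mermaid) from the example docker-compose.yml:

# Restart only the misbehaving services; the affected set can vary between runs
$ docker-compose restart bpmn excalidraw
# Verify that CPU has settled afterwards
$ docker stats --no-stream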
Thanks for the detailed report, it really helps!
I'm using almost the same version on kroki.io and I was able to reproduce this issue:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

$ uname -a
Linux xxx 5.4.0-91-generic #102-Ubuntu SMP Fri Nov 5 16:31:28 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

$ docker version
Client: Docker Engine - Community
Version: 20.10.12
API version: 1.41
Go version: go1.16.12
Git commit: e91ed57
Built: Mon Dec 13 11:45:27 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.12
API version: 1.41 (minimum version 1.12)
Go version: go1.16.12
Git commit: 459d0df
Built: Mon Dec 13 11:43:36 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.12
GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0

I can only reproduce it when I start more than one container that relies on Puppeteer. I also cannot reproduce it on my machine using the latest version of ...
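To make that reproduction condition concrete, a sketch assuming the service names from the example docker-compose.yml:

# Starting a single Puppeteer-based service does not trigger the issue
$ docker-compose up -d mermaid
# Starting several Puppeteer-based services together does
$ docker-compose up -d mermaid bpmn excalidraw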
Probably an issue between Docker and Puppeteer/Chrome; we upgraded from Puppeteer 10.4.0 to 12.0.1. Puppeteer 10.4.0 was using Chromium 92.0.4512.0 (r884014), whereas Puppeteer 12.0.1 uses Chromium 97.0.4692.0 (r938248). Hopefully this issue is fixed in the latest version, Puppeteer 13.1.2 (using Chromium 98.0.4758.0 (r950341)). We might also consider disabling a few features from Chromium: https://peter.sh/experiments/chromium-command-line-switches/
We might also try to add ...
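As an illustration of the kind of switches listed on that page, here is how one might experiment with a local Chromium binary (these are real Chromium switches, but whether Kroki exposes a way to pass them, and whether they help here, is untested):

# Disable a few background features that can keep the renderer busy
$ chromium --headless --disable-gpu --disable-dev-shm-usage \
    --disable-background-timer-throttling about:blank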
I was able to reproduce this issue using ...
Also having this issue. My first thought was that the Chrome instances were thrashing after hitting a critical memory-usage point, but after reading these comments, I'm not sure. I'm running a similar config to those listed. I'll revert to 0.15.1 for now until I hear more from you all.
Since you pointed to Puppeteer, I suppose you already found the following issue: puppeteer/puppeteer/issues/7892. Not much action on it yet, though :/
Interesting, especially this part: ...

Using ...
I can still reproduce this issue using ... For reference, Puppeteer 13 is based on Chromium 100.0.4889.0, which is not yet available on Alpine. The latest version available on Alpine edge is Chromium 98.0.4758.102-r3.
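One way to check which Chromium version Alpine edge currently ships (a sketch; the output changes over time):

# Query the Alpine edge package index for chromium
$ docker run --rm alpine:edge sh -c 'apk update -q && apk search chromium'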
It seems that since upgrading to version 0.16.0, the machine requirements have increased a lot. Before, our machine was almost idle at maximum ~25%. Now it's always going to 100%.
Is this expected?
@Mogztter
/cc @dlouzan @ercanucan