ADS-B Feeder | High CPU Usage on Home Assistant OS 10 #151
Comments
Same issue for me. First I updated the addon to x.3, and after that the OS to 10.0. The CPU load increased from normally 3% to 26%. If I restart the addon, the CPU load goes down to 1-2% for about 30s and then switches back to 26%. |
Out of curiosity, does this also happen when disabling the http server? |
The FR24 feeder service seems to be the issue. Deactivating that drops the CPU usage. |
Confirmed having the same issue, with 27% CPU consumption shown on the add-on page. |
Same issue for me. |
Yes, after disabling FR24 Feed the CPU came back to normal. It seems like that is what causes the issue. |
I am waiting for a fix also; I am not a programmer, so I can only help test, not fix. I have disabled FR24feed for the moment while waiting. If it takes longer I will have to fire up another VM just to run it outside of HA, but if anyone can help fix this issue I will be very happy :) |
Yeah, the update was not really meant to fix the CPU issue; for the moment I am not sure what happens there. FR24feed was updated along with it. |
The latest update crashed my HA with an OOM; I had to roll back to 1.20.3. |
Any updates here, please? I have had to disable the addon while the issue persists, due to the CPU temperature rising uncomfortably :( |
I am waiting for a fix as well :) |
Me too waiting for fix |
I just updated my OrangePi5 to docker v23, but I am having a hard time replicating this. Running with FR24: [screenshot] Running without: [screenshot] @mrkaqz What sensor are you looking at? Where is the information from? |
Logging into the ha-os container, I started the feeder service by hand and watched what it does, which looks to me like iterating over the file system maybe, but I'm not a systems programmer. If the feeder goes through all possible file descriptors, then maybe setting a low limit might help. |
Okay, starting with ( ulimit -n 1024 && fr24feed ) works, as it only iterates up to 1024. Therefore I propose the following "fix": change the feeder start so that the limit is set first, as in the sketch below. |
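A minimal sketch of that proposal, assuming the add-on launches fr24feed from a plain shell run script (the actual file name and any flags passed to the binary are not preserved in this thread):

```sh
#!/usr/bin/env sh
# Before (assumed): fr24feed inherits the container's default open-file
# limit (around a million under docker >= v23), and its descriptor loop
# iterates over all of it.
#fr24feed

# After: a subshell caps the soft limit first, so the loop stops at
# 1024; the parent shell and other services keep their own limits.
( ulimit -n 1024 && fr24feed )
```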
Very interesting. Did you try setting the ulimits via docker/podman (--ulimit option)? Maybe this would also do the trick without having to modify the scripts in the container. |
I am not sure I can do that within the Home Assistant appliance. The thing is that the feeder binary seems to iterate over all possible file descriptors (for whatever reason). So imposing that limit from outside should work as well, BUT it is then imposed on everything within the container (the webserver, the other feeder, etc.), and then 1024 might be too low a number. |
That makes perfect sense. I came from the sdr-enthusiasts/docker-flightradar24 project where you linked your post here, I am running the fr24 stuff in a separate container, hence my suggestion. I didn't realize it didn't fit in the context of this project. |
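For the separate-container case, a hedged sketch of that suggestion; `--ulimit nofile=<soft>:<hard>` is docker's flag for this, while the image name below is just a placeholder:

```sh
# Cap the open-file limit for everything inside the container. Note the
# caveat above: this also applies to the webserver and any other
# feeders sharing the container, so 1024 may be too low there.
docker run -d --ulimit nofile=1024:1024 <your-fr24feed-image>
```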
In the newest image, Thom-x exposes the ulimit in the Dockerfile. |
I wouldn't know where to start guessing a limit that is easy on the CPU (the lower the better, because all descriptors are iterated over) and at the same time large enough for all services included in the addon. Seeing that 1024 seemed to be enough for fr24feed, the million-odd from the Dockerfile could be overly large already. It's CPU time and energy we could be cumulatively wasting here. |
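One way to ground that guess (an assumption on my part, not something tested in this thread) is to count how many descriptors each service actually holds at runtime inside the container:

```sh
# For every process matching the add-on's services, print its binary and
# how many file descriptors it currently has open; the process names are
# assumptions about what the add-on runs.
for pid in $(pgrep -f 'fr24feed|thttpd'); do
    echo "$pid $(readlink /proc/$pid/exe): $(ls /proc/$pid/fd | wc -l) open fds"
done
```

A safe limit would then sit comfortably above the largest observed count.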
@MaxWinterstein Could you look at this please? The container is updated to version 1.23 now. Fingers crossed. Thanks |
@MaxWinterstein Hey Max, are you still actively taking care of this repository? What is your plan for this issue? In my opinion, only the ENV needs to be set. |
Sadly this is not that easy / I don't fully get this yet. The mentioned upstream commits only take care of this for the thttpd start. Simply overwriting the original files is not good style, as all changes that follow would need to be adjusted as well. I will try to find some flexible patching and make the value configurable; then we can figure out good values. |
Thanks for your feedback. https://man7.org/linux/man-pages/man3/ulimit.3.html Maybe you can expose the ENV in the addon settings and default it to -1 so everybody can override it. That way we have a quick workaround and can see if it helps. |
It's kinda ugly that Thom-x sells this as a system setting, as in fact it only targets the thttpd server process. Also, I am confused why there are no issues with the fr24 feed in his repository; this should be a problem for him as well?! |
Sorry, you are right. He sets the ulimit right before the thttpd start, so all processes started before it run with an unlimited value. |
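To illustrate the point, a sketch of the assumed upstream start ordering (not the actual entrypoint; the variable name is made up here). Only processes launched after the ulimit call get the lowered limit:

```sh
fr24feed &             # started first: inherits the default (huge) limit
ulimit -n "$ULIMIT_N"  # lowers the soft limit for this shell only...
thttpd -D -p 8080      # ...so only thttpd starts with the capped value
```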
It's not too uncommon to package/provide a custom run script. My proposal was not very invasive. |
Sure, don't get me wrong. It is just another thing on the list of things I have to remember to always double-check when the upstream releases. And I am lazy :) I just released an update that contains two new ulimit settings. This should allow everyone with issues to test. Feedback is highly welcomed ❤️ |
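A sketch of how such a setting could be applied in the add-on's run script; the variable name FR24FEED_ULIMIT_N is confirmed by a report below, while the -1-means-untouched convention and the script layout are assumptions:

```sh
# Apply the configured limit only when the user set a non-negative
# value; otherwise leave the container default in place.
if [ "${FR24FEED_ULIMIT_N:--1}" -ge 0 ]; then
    ulimit -n "$FR24FEED_ULIMIT_N"
fi
exec fr24feed
```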
Updated and been running for hours, not seeing high CPU usage so far with FR24FEED_ULIMIT_N set to 1024. [screenshot] Thanks a lot for the update.
…On Tue, Jun 6, 2023 at 4:01 AM Tom wrote:
> Installing as we speak/type
|
Thanks for the patch. CPU went down from 34 to 3%. I set both ulimits to 1024, which IIRC was the default in docker < v23 (HassOS < v10). |
Perfect result even with default settings 🤗👍 |
Super happy to see that this improved the CPU issue 🥳 But I'm still confused why the upstream (Thom-x repo) seems to not have this issue 🤔 |
Alright, thanks for the ride everyone, it seems we have a solution figured out. Thanks ❤️ |
Originally posted by @maweki in #149 (comment)