
Continuous CPU usage on Raspberry Pi 3 Model B #9475

Closed
alexdrl opened this issue Feb 23, 2018 · 10 comments

Comments

@alexdrl

alexdrl commented Feb 23, 2018

Bug report

System info: InfluxDB 1.4.3 (latest Docker image on arm)

Steps to reproduce:

Hello, I am seeing continuous high CPU usage running this container on a Raspberry Pi 3.

[screenshot: container CPU usage graph]
Also, the memory usage is a bit odd

I am hosting Grafana, InfluxDB and Home Assistant on it. I noticed this when I started monitoring these containers with Telegraf.
[photo: Telegraf CPU/memory monitoring graphs]

I have tried stopping Grafana and Home Assistant, and the CPU usage spikes continue. What can I do?

Expected behavior: Normal CPU usage, with occasional spikes.

Actual behavior: High CPU usage.

Additional info:

I've just discovered these debug commands, so in a few hours I'll post the results of some of them.

Also, if this is a performance, locking, etc. issue, the following commands are useful to create debug information for the team.

curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"
curl -o vars.txt "http://localhost:8086/debug/vars"
iostat -xd 1 30 > iostat.txt

vars.txt
profiles.tar.gz

I don't have iostat installed on the Raspberry Pi, but I can install it if needed.
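
If needed, I can grab it from the sysstat package; on Raspbian something like this should do it (assuming apt is available on this Debian-based system):

# iostat is provided by the sysstat package on Raspbian/Debian
sudo apt-get update && sudo apt-get install -y sysstat
# capture 30 one-second samples of extended device stats, as requested above
iostat -xd 1 30 > iostat.txt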

@dgnorton
Contributor

@alexdrl could you retest this with the latest 1.5.1?

@alexdrl
Author

alexdrl commented Mar 23, 2018

I am using the 1.5.1 version, coming from the influxdb:latest docker image, and I am seeing the same behaviour... Do you want me to reupload some logs?

@dgnorton
Contributor

@alexdrl I checked internally and we have at least one person running 1.4.2 on an RPi 3 and seeing ~6% CPU usage, but he's not running in Docker. Could you test InfluxDB outside of Docker?

@alexdrl
Author

alexdrl commented Mar 23, 2018

@dgnorton this weekend I'll check that. I'll install the .deb package. Thanks!

@dgnorton
Contributor

@alexdrl two other ideas brought up during our internal discussion were:

  • Building influxd with Go 1.10 might help. One of the team looked at the profile you posted (thanks for that by the way) and noticed it was spending a lot of time in HLL Count. Go 1.10 added support for more ARM instructions, which may improve perf on the RPi.
  • Another idea was to disable the internal monitor because that appears, from the profile, to be creating a lot of load. The monitor can be disabled in the config:
[monitor]
   store-enabled = false
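
Since you're running the Docker image, you don't necessarily have to edit the config file; InfluxDB 1.x also reads environment variable overrides of the form INFLUXDB_$SECTION_$NAME, so something like the following should work (a sketch, assuming the official influxdb image and default ports; the container name and tag are just examples):

# disable the internal monitor store via an env var override
# (assumes InfluxDB 1.x env var config overrides apply as documented)
docker run -d --name influxdb \
  -p 8086:8086 \
  -e INFLUXDB_MONITOR_STORE_ENABLED=false \
  influxdb:1.5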

@alexdrl
Author

alexdrl commented Mar 24, 2018

Another idea was to disable the internal monitor because that appears, from the profile, to be creating a lot of load. The monitor can be disabled in the config:

Wow, this has changed the constant 100% CPU usage to a stable 0% CPU with occasional usage spikes :D

What does that component do? I'll post another graph of the CPU and memory utilisation with more hours of logging with telegraf.

P.S.: Improving the general speed of some ARM instructions using Go 1.10 also seems like a good idea.

@dgnorton
Contributor

dgnorton commented Mar 24, 2018

Wow, this has changed the constant 100% CPU usage to a stable 0% CPU with occasional usage spikes :D

That's great news!

What does that component do?

Monitoring keeps track of InfluxDB's internal stats, like the number of points written, an estimate of the number of series, etc. It's sometimes useful for diagnosing problems. Some of these internal stats can be expensive to compute with 100% accuracy; however, that level of accuracy isn't always needed. E.g., if influxd were OOMing, it might be helpful to know roughly how many series have been written. If the estimate says there are 200M series but there are really only 195M, that's close enough; either of those is likely to cause a problem with an in-memory index. As mentioned in my earlier comment, InfluxDB uses HLL to compute that estimate. It's efficient on Intel hardware but hasn't had much testing on ARM. It will be interesting to see if building influxd with Go 1.10 improves ARM performance for internal monitoring.
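
If you're curious what it records, you can still get an on-demand snapshot of those stats from the influx CLI even with the store disabled, since (as far as I recall) these commands read the in-memory counters rather than the _internal database:

# one-off snapshot of the internal stats
influx -execute 'SHOW STATS'
# build and runtime diagnostics
influx -execute 'SHOW DIAGNOSTICS'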

Thanks for reporting. I'm going to close this issue since that config change seems to have fixed your problem. If you feel there's still an issue with this, we can reopen the issue.

@alexdrl
Author

alexdrl commented Mar 25, 2018

Thank you for looking into and solving the issue, and for the complete explanation. Ping me if you want me to test a Go 1.10 compiled build with that option enabled.

@krgnl

krgnl commented May 4, 2018

Not sure if the bug should be re-opened, but I experienced the same behaviour on a Raspberry Pi Zero. I am running an instance of InfluxDB with only 2 writes every 11 seconds. I saw lots of empty values, so I started investigating. It turns out InfluxDB was running at close to full CPU most of the time.

The above config change seems to have partly fixed my problems. I am seeing 4-6% CPU utilization on writes, rather than 97% continuously.

I am not a developer but this smells buggy to me? Thanks a lot for the solution though! =)

@djsomi

djsomi commented Sep 7, 2020

For some strange reason, my solution was to stop the service, start influxd by hand, and let it run for a while, and the high CPU load went away. :D

psalin added a commit to psalin/ruuvimon that referenced this issue Jul 16, 2021
Used a lot of CPU on Raspberrys.
influxdata/influxdb#9475