Massive increase in CPU usage timelion Kibana 5.4 #11646

Closed
remydb opened this issue May 8, 2017 · 2 comments
Labels
bug - Fixes for quality problems that affect the customer experience
Feature:Timelion - Timelion app and visualization
Feature:Visualizations - Generic visualization features (in case no more specific feature label is available)

Comments


remydb commented May 8, 2017

Kibana version: 5.4.0

Elasticsearch version: 5.4.0

Server OS version: Ubuntu 16.04.2 LTS

Browser version: Chrome Version 57.0.2987.133

Browser OS version: MacOS 10.12.4

Original install method (e.g. download page, yum, from source, etc.): elastic.co apt repository

Description of the problem including expected versus actual behavior:
After upgrading our entire ELK stack from 5.3 to 5.4, our timelion graphs have started using an enormous amount of CPU. Just opening the timelion page in Kibana with the default ".es(*)" expression causes the load on our cluster to increase considerably.

We have a number of dashboards that incorporate about 5 or 6 timelion graphs each. These worked just fine in 5.3, but in 5.4 they time out and cause the load on all 3 cluster nodes to sky-rocket to 40+ when opened.
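For reference, ".es(*)" is the bare default expression; a scoped .es() expression (the index pattern and field names here are just placeholders, not our actual data) would look roughly like this:

```
.es(index=logstash-*, q='response:500', timefield='@timestamp', metric=avg:bytes)
```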

Steps to reproduce:
Just open any timelion visualisation.

weltenwort added the Feature:Timelion, Feature:Visualizations, and bug labels on May 9, 2017

remydb commented Jun 2, 2017

Just an update on this. Since the upgrade I've noticed that our big queries aren't being spread evenly across our 3-node cluster. When I open certain dashboards, especially ones with timelion graphs, the load increases massively on one node while the other two basically sit idle. So perhaps I was wrong earlier in saying the load was sky-rocketing on all 3 cluster nodes...
What's also weird is that the shards for the indices we're querying are spread nicely across all the nodes, with 1 replica on a different node, yet the replicas never seem to get used by the query.

Perhaps this should be more of an elasticsearch issue than a kibana one...
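(For anyone checking the same thing: shard-to-node placement can be inspected with the _cat API. This assumes Elasticsearch is reachable locally on the default port; swap in your own index pattern.)

```
curl -s 'http://localhost:9200/_cat/shards/logstash-*?v'
```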


icybin commented Jul 1, 2017

(OT) My case is somewhat different. I am running Kibana 5.4.{2,3} on Ubuntu 16.04 LTS. I noticed that after running for several days, the Kibana (node) process was constantly consuming ~90% CPU, even though there weren't any Kibana users (I have an nginx proxy in front of Kibana, so I can be sure of that). I upgraded from 5.4.2 to 5.4.3 and the problem was resolved... but then it happened again.

Finally I found that the logrotate script, which called systemctl restart kibana, didn't work properly: the system ended up running two different Kibana processes sharing the same configuration, and the newly launched process was the one eating CPU. (The systemctl restart kibana command couldn't kill the old process.)

I disabled the systemd script and switched to supervisord instead, which handles restarting Kibana reliably.
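A minimal supervisord program section for this would look roughly like the following (paths assume a standard deb/apt install of Kibana; adjust to your layout):

```
[program:kibana]
command=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml
user=kibana
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/kibana.out.log
stderr_logfile=/var/log/supervisor/kibana.err.log
```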

I know this is quite off-topic, but I hope it helps.

PS: It's very hard to say where the root cause lies: the Kibana systemd script, systemd itself, or Kibana. But I don't care ;)

remydb closed this as completed on Sep 5, 2017