
Spinnaker uses way too much CPU #3962

Closed
ricklagerweij opened this issue Feb 12, 2019 · 8 comments

Comments

@ricklagerweij

commented Feb 12, 2019

Issue Summary:

There are 7 kubectl processes (invoked with a context, fetching all resources of the cluster as -o json), each using 100% CPU (running on vSphere); this causes the Spinnaker VM to use a third of an ESXi node.
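
To see exactly which kubectl invocations are consuming the CPU, a quick check on the Spinnaker VM (a sketch, assuming standard Linux procps tooling) is:

    # List all running kubectl processes with their CPU usage, age, and full command line
    ps -C kubectl -o pid,pcpu,etime,args

The full command lines show which context and kubeconfig each process was started with.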

Cloud Provider(s):

Kubernetes V2 Provider

Environment:

Local Debian install, Spinnaker v1.12.1, Halyard 1.15, Kubernetes 1.13.0

Feature Area:

Not sure which microservice is running these kubectl commands.

Description:

I'm using both the v1 and v2 account providers; each cluster has a v1 and a v2 account configured.

Steps to Reproduce:

Just run the Spinnaker service as usual with a hal deploy apply.

Additional Details:

This also happened in our distributed environment; migrating it back to the virtual machine is what allowed us to identify the issue. Spinnaker pods on the Kubernetes cluster would crash after 40-48 hours.

@spinnakerbot

commented Mar 31, 2019

This issue hasn't been updated in 45 days, so we are tagging it as 'stale'. If you want to remove this label, comment:

@spinnakerbot remove-label stale

@spinnakerbot added the stale label Mar 31, 2019

@dniel

commented May 7, 2019

@maggieneterval I'm experiencing the exact same issue with Kubernetes 1.14.1, Spinnaker 1.13.6, and vSphere. Did you find out anything about this problem?

@maggieneterval

commented May 7, 2019

Hey @dniel apologies, I'm not actively investigating this issue right now (just added the tag to help us organize open issues), but here's a link to a possibly related Kubernetes v2 performance issue where an active troubleshooting discussion is taking place: #4367

@mattnworb

commented May 7, 2019

Are your pods crashing due to OutOfMemoryErrors? If so, the high CPU can come from the JVM trying hard to free up space in GC, failing, and continually retrying the collection. If you don't see OOMs, then it would seem entirely separate from #4367.
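
One way to check for that (a sketch, assuming a distributed install in a namespace called spinnaker, and that clouddriver is the service shelling out to kubectl, both of which are assumptions) is to grep the pod logs for OOMs:

    # Find the pod names, then search a pod's logs for OutOfMemoryError
    kubectl -n spinnaker get pods
    kubectl -n spinnaker logs <clouddriver-pod-name> | grep -i "OutOfMemoryError"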

@dniel

commented May 8, 2019

@mattnworb
I don't see any errors in the pod logs; all pods have status Running.
What I do see, when running the top command on the worker nodes where Spinnaker is running, is lots of kubectl commands referencing kubeconfigs maintained by Spinnaker, each using a lot of CPU.
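
To confirm which Spinnaker service is spawning these processes, one option (a sketch, assuming standard Linux procps on the worker nodes) is to resolve the parent of each kubectl process:

    # Show kubectl processes with their parent PID and full command line (including the kubeconfig path)
    ps -eo pid,ppid,pcpu,args | grep '[k]ubectl'
    # Resolve a parent PID to the process that spawned it (replace <PPID> with a value from above)
    ps -o args= -p <PPID>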

@spinnakerbot

commented Jun 22, 2019

This issue is tagged as 'stale' and hasn't been updated in 45 days, so we are tagging it as 'to-be-closed'. It will be closed in 45 days unless updates are made. If you want to remove this label, comment:

@spinnakerbot remove-label to-be-closed

@spinnakerbot

commented Aug 6, 2019

This issue is tagged as 'to-be-closed' and hasn't been updated in 45 days, so we are closing it. You can always reopen this issue if needed.

@romvdms

commented Aug 30, 2019

I'm seeing the same problem: a large number of kubectl commands and system load always at 100%.
It just started happening out of nowhere.
Running Spinnaker on Ubuntu and using the Kubernetes V2 provider in GCP.
