kubernetes is CPU-hungry on minikube #48948
/sig scalability
Really no idea if that is the right group, but none seemed completely appropriate.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This is still a problem with minikube v0.28.0. Could someone change the title to remove the reference to 1.7?
/remove-lifecycle stale
/remove-lifecycle rotten
Same issue with v0.26.0.
Same issue on macOS with 0.28.
Same issue with v0.28.1.
Maybe related: an issue about CPU usage of Docker for Mac with built-in Kubernetes:
I see this high CPU (~30% at idle) on macOS both with Docker for Mac's bundled Kubernetes and with minikube. @lizrice described an approach for installing k8s using Vagrant without minikube, but still with VirtualBox. She noted that she was still seeing high CPU usage. minikube may not be the issue here.
I tried using minikube with the new
In an attempt to track this down, I ran:
and found that
I assume that the next step is to follow https://github.com/kubernetes/community/blob/master/contributors/devel/profiling.md and instrument each of the processes in turn, to find out what they're doing when they should be idle?
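For anyone else trying to narrow this down, a rough sketch of those two steps (not verified here; the busybox top flags inside the VM and the pprof proxy path are assumptions that may vary by minikube/Kubernetes version):

```sh
# 1. See which processes inside the minikube VM are busy while the cluster is "idle".
minikube ssh -- top -b -n 1 | head -n 25

# 2. Pull a 30s CPU profile from the apiserver, per the profiling.md doc linked above.
#    Requires a local Go toolchain; run `kill %1` afterwards to stop the proxy.
kubectl proxy --port=8001 &
go tool pprof "http://localhost:8001/debug/pprof/profile?seconds=30"
```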
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
This is still happening to me in v0.30.0.
@rcorre, or someone with permission, please edit the title to remove the reference to 1.7.
Is there any information I can give to help debug this?
Can someone from sig/scalability take a look at this? I would love to help out - can test, reproduce, explore, report... This is becoming quite a blocker for local Kubernetes development!
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Wondering what the best way is to develop locally for Kubernetes? The base load and slowness of minikube is a deal breaker for rapid development, especially on a laptop. I have thought about:
But all of these have their own problems. I assume that, with the popularity of Kubernetes and minikube, there must be a way to run a local cluster without the level of resource consumption that many people are seeing. Is there an alternative to minikube that has lower overhead? Basically, an idle MySQL server on my laptop is no big deal, and I don't notice it. An idling minikube cluster (without any services running, mind you) is definitely felt. What can I do to help?
@leahciMic did you try kind? https://github.com/kubernetes-sigs/kind
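For anyone wanting to compare numbers, a minimal kind session looks roughly like this (assumes Docker and the kind binary are installed; the cluster name is arbitrary):

```sh
kind create cluster --name cpu-test           # single-node cluster running inside a container
kubectl cluster-info --context kind-cpu-test  # sanity check that the cluster is up
# ...watch idle CPU of dockerd / the node container for a while, then clean up:
kind delete cluster --name cpu-test
```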
I got 30-40% constant usage with minikube, 10-20% with microk8s.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I am experiencing the same problem with minikube v1.5.2 on Darwin 10.14.6.
Similar issue; it vastly hinders local development agility. I set up a new barebones single-node development cluster using kubeadm (v1.17) inside a reasonably beefy VM (4 cores, 4 GB RAM, with KVM tuning such as raw-disk virtio). Idle-state CPU usage is around 30% and spikes to 50-60% during container creation/deletion. The controller manager and scheduler often time out waiting for the apiserver and go into CrashLoopBackOff (40+ restarts after 10 hours of roughly idling). I see similar behaviour across a range of hardware configurations on different Linux distributions, and while searching around I came across quite a few Stack Overflow posts and GitHub issues complaining of the same thing. While I understand that kube is optimized for running at scale, it does non-trivially hurt local development. What would be great is some resources pointing us towards configuration changes that reduce resource consumption at the expense of cluster reliability, e.g. health check/poll frequencies (a sketch of the kind of knobs I mean follows below). As developers working on local machines, many of us might be happy to take that tradeoff.
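To make that tradeoff concrete, here is a hedged sketch (the flag names are real, the values are purely illustrative and untested) of lengthening reporting/polling intervals on a kubeadm node:

```sh
# Make the kubelet report node status less often (default is 10s), cutting idle apiserver traffic.
# deb/rpm kubeadm installs read KUBELET_EXTRA_ARGS from /etc/default/kubelet (or /etc/sysconfig/kubelet).
echo 'KUBELET_EXTRA_ARGS=--node-status-update-frequency=1m' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet

# Similarly, --node-monitor-period and --kube-api-qps can be adjusted in
# /etc/kubernetes/manifests/kube-controller-manager.yaml; the kubelet restarts the
# static pod automatically when the manifest changes.
```

Longer intervals mean the control plane reacts more slowly to failures, which is often an acceptable tradeoff on a single-node development cluster.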
Same issue (100% CPU for VBoxHeadless).
Attached is the perf report for VBoxHeadless (recorded over a few seconds while VBoxHeadless was at 100% CPU).
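For others who want to capture the same data, a sketch of the recording commands (assumes Linux perf is installed on the host and that a single VBoxHeadless process is running):

```sh
# Record ~10 seconds of call graphs from the busy VirtualBox process, then summarise.
perf record -g -p "$(pgrep -f VBoxHeadless | head -n 1)" -- sleep 10
perf report --stdio | head -n 40
```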
Hm, seeing this thread is a bit painful. People keep posting and reporting, but it seems no one cares. The issue keeps going stale. I will probably run away from the whole kubeXXX altogether over such behaviour. Oh well, maybe that's the attitude: submit a patch or shut up?
And if someone asks me why I have nothing better to do than post here, you can see below. But looking above makes me not even bother to report anything... Wake up, people. If you carry on like this, you lose users, your project becomes less and less popular, and it is eventually phased out...
FWIW, minikube has vastly improved CPU overhead since v1.8 - a reduction of 19% in 2020 alone. On a modern developer machine like a MacBook Pro, minikube with hyperkit now consumes roughly 6% of available system CPU (30-40% of a single CPU core): https://docs.google.com/spreadsheets/d/1qzgVsZ9y0zqCjoQlN_LGJH3MUMqrVPexzNhdB2jzBqU/edit#gid=1614668143 It isn't perfect, but I feel like this issue can be closed, or at least moved to the minikube repo. In 2020, we'll be focusing on reducing usage in the apiserver and etcd, which is where most of minikube's CPU cycles are now spent.
@tstromberg That is great news. I hope the apiserver/etcd usage can be lowered!
As per #48948 (comment), it seems fine to close this issue at this time.
/close
@oomichi: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thank you for your work, guys!
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
When running kubernetes 1.7 on minikube, CPU usage for the VBoxHeadless process is constantly around 100%.
What you expected to happen:
CPU usage closer to 10%, as it is when running 1.6.4.
How to reproduce it (as minimally and precisely as possible):
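(The steps were not preserved in this copy of the report; the following is a reconstructed sketch, assuming the VirtualBox driver and a Linux host with pgrep/top.)

```sh
minikube start --vm-driver=virtualbox --kubernetes-version=v1.7.0
top -p "$(pgrep -f VBoxHeadless)"   # VBoxHeadless sits near 100% CPU while the cluster idles
minikube delete
minikube start --vm-driver=virtualbox --kubernetes-version=v1.6.4
top -p "$(pgrep -f VBoxHeadless)"   # the same setup idles closer to 10%
```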
Anything else we need to know?:
This could be a minikube bug, but since CPU usage changes drastically between kubernetes versions (but the same minikube version), I figured it might be a kubernetes issue.
Environment:
- Kubernetes version (kubectl version): 1.7.0 vs 1.6.4
- Kernel (uname -a): 4.9.33-1-lts