Reopen for issue 75565 (high CPU utilization in Docker for mac desktop) #100415
Comments
this could be a docker for mac problem. /sig node
Per the original report 75565, the issue appeared to be in kubernetes, perhaps an issue with polling multiple times a second. "Idle master constantly burns CPU/disk polling itself #75565" "Foritus commented on Mar 25, 2019 This can certainly not have been closed as chatty 'by design'?" With 8 CPUs allocated to Docker for Mac desktop, battery life, heat, and CPU utilization are not practical. Is it possible to work with the Docker for Mac team and utilize a new hook/switch/something that would tell etcd polling to enter a 'mobile' or CPU-friendly mode and lessen whatever polling is being done here? I do not currently have a linux laptop system to test this on.
There does seem to be a lack of clarity around whose issue this is. The Docker for mac team is saying kubernetes. If there are tuning properties for etcd, and this issue manifests only when kubernetes in Docker for mac is active, and the issue worsens when more CPUs are allocated to Docker for mac, how can we prove whether the issue is in the kubernetes runtime or the docker runtime? The Docker for mac team says this is in the kubernetes runtime. Are there etcd tuning parameters which Docker for mac can/should set? What additional information can I collect to help clarify the root cause and then move forward to a resolution?
I'll repeat my comment from the other issue:
As far as whose "issue" this is: the kubernetes runtime actually is doing work even when the system is "idle", in order to make sure all components are still alive and ready, etc. Possibly you could compare the same install on different platforms to see if we are somehow triggering a docker for mac issue, but I kind of doubt that.
I'm sorry this is probably not what you wanted to hear, but ...
Kubernetes is not really responsible for the performance of any particular proprietary distro. If specific performance problems are reported, our contributors may take time to look at them, or anyone (including new contributors) may attempt to contribute improvements. But a general complaint like this about a specific product is not very actionable for us. SIG Scalability tends to work on this for large clusters, but there is no organized group for small clusters.
etcd is its own project and has documentation for this.
It sounds like you've already indicated that this is an issue with etcd, which would either be a bug with etcd, or the configuration in docker for mac.
There isn't really one singular "k8s team". Kubernetes is an open governance project owned by the CNCF, with contributors from all over the world working independently or at many different employers. The project is organized into SIGs that manage different portions of the code, but even that is voluntary. Each SIG has meetings. https://www.kubernetes.dev/resources/community-groups/ If the docker for mac team has an actionable Kubernetes bug, they should file it.
Re: etcd: it's almost certainly the case that the rest of the cluster is using etcd, rather than there being a "performance issue" with it. The root cause is almost certainly that "idle" k8s clusters are not actually idle, and this is by design so that they can recover rapidly from faults; k8s is not optimized for single-system use. There are many flags that could be configured to improve idle performance, but that's not an "issue" unless our docs are insufficiently clear (which they probably are).
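For illustration, a few real control-plane knobs govern how often components heartbeat and report status. The values below are arbitrary examples to show the shape of the flags, not recommendations, and whether Docker for Mac exposes a way to set them is a separate question:

```shell
# etcd raft timing (milliseconds; defaults are 100 / 1000).
# Raising these reduces wakeups at the cost of slower failure detection.
etcd --heartbeat-interval=500 --election-timeout=5000

# kubelet: how often the node reports its status (default 10s).
kubelet --node-status-update-frequency=1m

# kube-controller-manager: how often node health is checked (default 5s).
kube-controller-manager --node-monitor-period=30s
```

These are configuration fragments rather than a tested tuning recipe; verify each flag against the etcd and Kubernetes component references for the versions in use before changing anything.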
I don't have access to kube-scheduler on docker for mac. Is it an issue that the packaging of docker for mac didn't implement/provide this OR is it buried somewhere in the k8s runtime that we can get to for tuning it? |
Usually kube-scheduler runs as a pod so you can see the flags, logs etc. using the normal pod tooling (kubectl get, kubectl logs, etc.) |
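To make that concrete, a sketch of the usual inspection commands. These need a running cluster; the `component=kube-scheduler` label is the kubeadm convention and is an assumption here, so list the pods first to see what Docker for Mac actually names things:

```shell
# List control-plane pods to find the scheduler's actual pod name.
kubectl get pods -n kube-system

# Show the scheduler pod's spec; its flags appear under
# spec.containers[].command in the output.
kubectl get pod -n kube-system -l component=kube-scheduler -o yaml

# Tail the scheduler's recent logs.
kubectl logs -n kube-system -l component=kube-scheduler --tail=50
```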
/remove-sig node since the issue is not with the node configuration, but rather api chattiness /kind support As commented above, this issue looks like a support ticket to help tune kubernetes for a specific use case, rather than a bug report. It also lacks information on how exactly kubernetes was installed and configured, which any contributor would need to try to repro and troubleshoot if anybody were interested. I understand the desire to run everything on a single box for things like testing. And we do - with kind, for example. I don't recall serious issues with kind, but maybe that's because I run it on linux? Anyway, Kubernetes does not use issues on this repo for support requests. If you have a question on how to use Kubernetes or need to debug a specific issue, please visit our forums. /close
/remove-kind bug
@ehashman: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/triage accepted
/triage accepted
Did you mean to re-open the issue? Is there additional information on what's needed?
This is just something api-machinery does to remove issues from a search for issues to triage. They do this as a group in a recurring meeting; fede's account is often the one enacting the commands, but it's not fede personally.
What Ben said :)
This is just fundamentally how k8s works; there is zero chance it will get changed. (There may still be some misconfiguration that makes a particular setup especially chatty, e.g. continually health-checking something because it's not healthy.)
#75565
Was this issue ever resolved? It is closed but the symptom is still occurring on Docker for Mac Desktop with Kubernetes 1.19.7 built in.
What happened:
Docker for mac desktop version 3.2.2 (61853)
With kubernetes activated in docker for mac desktop and 4 CPUs allocated, docker shows ~30% CPU utilization when not doing work, with no pods running.
With kubernetes activated in docker for mac desktop and 8 CPUs allocated, docker shows ~60-100% CPU utilization when not doing work, with no pods running.
With kubernetes not activated, docker shows 7% when not doing work and no pods running.
What you expected to happen:
Kubernetes to not show high CPU utilization.
How to reproduce it (as minimally and precisely as possible):
Turn on kubernetes in docker for mac desktop
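A rough way to quantify the symptom while reproducing it. This is a sketch for macOS; the hypervisor process name varies by Docker Desktop version, so the grep pattern below is an assumption:

```shell
# Sample process CPU a few times and pick out Docker's VM process.
top -l 3 -stats pid,cpu,command | grep -i -E 'hyperkit|qemu|com.docker|virtualization'

# While "idle", list what the embedded cluster is actually running.
kubectl get pods -A
```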
Anything else we need to know?:
Environment:
kubectl version: 1.19.7
cat /etc/os-release: OSX - Big Sur
uname -a: 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64 x86_64