Report resource limits and usage of Kubelet and other daemons #490
Comments
Getting the kubelet inside a container will be tricky, as it needs to talk to Docker (do-able, but tricky). Likewise, getting the service proxy in is a good idea. --brendan
The real trick will be getting docker into a container.
/cc @proppy
We are running the master components in pods, so their resource usage is picked up by cAdvisor and heapster automatically. But we need to put the kubelet, kube-proxy, and docker into cgroups so that cAdvisor can monitor their usage. cc/ @vmarmol
Is this required for v1.0?
I spoke offline with Victor.
Both the Kubelet and Kube-proxy are already in resource containers.
Ah, OK. So nothing needs to be done here.
The argument was that if we got the Kube-proxy in a container we'd get the stats for free.
Putting it in a pod, we would get the stats just the way we get them for other pods.
@ArtfulCoder We need a resource container for docker no matter what; @vmarmol is doing that. We can put kube-proxy into a pod, but not the kubelet itself. The Kubelet can run as a container, but not as a pod today. There is a chicken-and-egg issue anyway.
Issue #147 was filed for QoS tiers. But to really guarantee QoS for services / pods / containers, we need to enforce the daemons' resource limits, or at least track their usage. Currently cAdvisor runs inside a docker container; we could put the Kubelet and other daemons into their own docker containers, then rely on cAdvisor to monitor and report their usage.