
Report resource limits and usage of Kubelet and other daemons #490

Closed · dchen1107 opened this issue Jul 16, 2014 · 12 comments · Fixed by kubernetes-retired/heapster#285

Labels: area/introspection, area/isolation, priority/important-soon, sig/node, sig/scheduling

Comments

@dchen1107 (Member)

Issue #147 was filed for QoS tiers. But to really guarantee QoS for services / pods / containers, we need to enforce resource limits on the daemons, or at least track their usage. cAdvisor currently runs inside a Docker container; we could put the Kubelet and other daemons into their own Docker containers as well, then rely on cAdvisor to monitor and report their usage.
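
For illustration, a minimal sketch of what reading a daemon's usage back out of cAdvisor could look like, assuming a local cAdvisor on port 8080, its v1.3 containers endpoint, and a hypothetical "/system/kubelet" container name; only a few fields of the response are decoded:

```go
// Sketch: ask a local cAdvisor for recent stats of one container.
// The endpoint, port, and container name are assumptions, not taken from this issue.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Only the fields we care about from cAdvisor's container info response.
type containerInfo struct {
	Name  string `json:"name"`
	Stats []struct {
		Timestamp time.Time `json:"timestamp"`
		CPU       struct {
			Usage struct {
				Total uint64 `json:"total"` // cumulative CPU usage in nanoseconds
			} `json:"usage"`
		} `json:"cpu"`
		Memory struct {
			Usage uint64 `json:"usage"` // current memory usage in bytes
		} `json:"memory"`
	} `json:"stats"`
}

func main() {
	resp, err := http.Get("http://localhost:8080/api/v1.3/containers/system/kubelet")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var info containerInfo
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}
	if n := len(info.Stats); n > 0 {
		last := info.Stats[n-1]
		fmt.Printf("%s @ %s: cpu=%d ns, mem=%d bytes\n",
			info.Name, last.Timestamp.Format(time.RFC3339),
			last.CPU.Usage.Total, last.Memory.Usage)
	}
}
```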

@brendandburns (Contributor)

Getting the kubelet inside a container will be tricky, as it needs to talk to Docker (doable, but tricky). Getting the service proxy in as well is a good idea.

--brendan
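
For illustration, a minimal sketch of the "talk to Docker from inside a container" part, assuming the host's /var/run/docker.sock is bind-mounted into the kubelet's container; /version is a standard Docker Engine API endpoint, the rest is illustrative:

```go
// Sketch: reach the Docker daemon over its unix socket from inside a container,
// assuming the socket has been bind-mounted in.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Dial the Docker daemon's unix socket instead of a TCP address.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}

	// The host part of the URL is ignored because we dial the socket ourselves.
	resp, err := client.Get("http://docker/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```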

@thockin (Member) commented Aug 12, 2014

The real trick will be getting docker into a container.


@bgrant0607 (Member)

/cc @proppy

@bgrant0607 changed the title from "Tracking Kubelet and other daemons" to "Report resource limits and usage of Kubelet and other daemons" Oct 4, 2014
@bgrant0607 added the sig/scheduling and area/isolation labels and removed the priority/important-soon label Oct 4, 2014
@bgrant0607 bgrant0607 added this to the v1.0 milestone Oct 4, 2014
@bgrant0607 added the priority/backlog label Dec 3, 2014
@bgrant0607 (Member)

@ArtfulCoder

@goltermann goltermann removed this from the v1.0 milestone Feb 6, 2015
@dchen1107 dchen1107 removed this from the v1.0 milestone Feb 6, 2015
@davidopp added the sig/node label Feb 8, 2015
@bgrant0607 bgrant0607 added this to the v1.0 milestone Feb 28, 2015
@dchen1107 (Member, Author)

We are running the master components in pods, whose resource usage will be picked up by cAdvisor and Heapster automatically. But we need to put the kubelet, kube-proxy, and docker into cgroups so that cAdvisor can monitor their usage. cc/ @vmarmol
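
A minimal sketch of what "put a daemon into its own cgroup" amounts to, assuming a cgroup v1 layout under /sys/fs/cgroup; the /system/<name> hierarchy and the PID handling are illustrative, not the kubelet's actual code:

```go
// Sketch: create a per-daemon cgroup and move an already-running process into
// it by writing its pid to cgroup.procs, so cAdvisor can report its usage.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

// moveToCgroup creates /sys/fs/cgroup/<subsystem>/system/<name> and moves pid
// into it by writing the pid to that group's cgroup.procs file.
func moveToCgroup(subsystem, name string, pid int) error {
	dir := filepath.Join("/sys/fs/cgroup", subsystem, "system", name)
	if err := os.MkdirAll(dir, 0755); err != nil {
		return err
	}
	procs := filepath.Join(dir, "cgroup.procs")
	return os.WriteFile(procs, []byte(strconv.Itoa(pid)), 0644)
}

func main() {
	pid := os.Getpid() // in practice: the daemon's pid, e.g. read from a pidfile
	for _, subsys := range []string{"cpu", "memory"} {
		if err := moveToCgroup(subsys, "kube-proxy", pid); err != nil {
			fmt.Fprintln(os.Stderr, "cgroup move failed:", err)
		}
	}
}
```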

@dchen1107 added the priority/important-soon label and removed the priority/backlog label Mar 27, 2015
@vishh (Contributor) commented Mar 27, 2015

Is this required for v1.0?


@ArtfulCoder (Contributor)

I spoke offline with Victor.
I can try to put kube-proxy in a pod; at the least I can do a quick test to make sure it works.
#5419

@dchen1107 (Member, Author)

Both the kubelet and kube-proxy are already in resource containers.

@ArtfulCoder (Contributor)

Ah, OK. So nothing to be done here.


@vmarmol (Contributor) commented Apr 30, 2015

The argument was that if we got kube-proxy into a container we'd get the stats automatically. Otherwise we need to export them in Heapster. Since we were already looking to place kube-proxy in a container, this route may not be too bad.

@ArtfulCoder (Contributor)

By putting it in a pod, we would get the stats just the way we get them for other pods. Is that OK with you, Dawn?


@dchen1107 (Member, Author)

@ArtfulCoder We need a resource container for docker no matter what; @vmarmol is working on that. We can put kube-proxy into a pod, but not the kubelet itself. The kubelet can run as a container, but not as a pod today. There is a chicken-and-egg problem either way.
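
For completeness, a minimal sketch of reading a resource container's accounting files directly, assuming cgroup v1 and a hypothetical /kubelet cgroup; this is roughly the raw data cAdvisor/Heapster would surface for such a daemon:

```go
// Sketch: read cumulative CPU time and current memory usage for a daemon's
// resource container straight from cgroupfs. Paths are illustrative and depend
// on how the daemons were started.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readUint parses a single unsigned integer from a cgroup accounting file.
func readUint(path string) (uint64, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(b)), 10, 64)
}

func main() {
	cpuNs, err := readUint("/sys/fs/cgroup/cpuacct/kubelet/cpuacct.usage")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	memBytes, err := readUint("/sys/fs/cgroup/memory/kubelet/memory.usage_in_bytes")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Printf("kubelet: cpu=%d ns, memory=%d bytes\n", cpuNs, memBytes)
}
```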

vishh pushed a commit to vishh/kubernetes that referenced this issue Apr 6, 2016
Basic event handler to monitor events as they occur and grab events that have already happened
soltysh pushed a commit to soltysh/kubernetes that referenced this issue Jan 5, 2021
Bug 1897603: UPSTREAM: 96673: Fix Cinder volume detection on OpenStack Train
b3atlesfan pushed a commit to b3atlesfan/kubernetes that referenced this issue Feb 5, 2021
…cker

Add functional (end-to-end) testing
linxiulei pushed a commit to linxiulei/kubernetes that referenced this issue Jan 18, 2024
Bump some major dependencies to latest versions