Kubelet: parallelize cleaning up containers in unwanted pods #7048

Merged 1 commit into kubernetes:master on Apr 20, 2015

Conversation

yujuhong (Contributor)

Kubelet kills unwanted pods in SyncPods, which directly impacts the latency of a
sync iteration. This change parallelizes the cleanup to lessen the effect.

Eventually, we should leverage per-pod workers for cleanup, with the exception
of truly orphaned pods.
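
In essence, the change fans the per-pod cleanup out into goroutines and waits for them all to finish, so the sync iteration pays roughly the cost of the slowest pod rather than the sum of all pods. The sketch below is a minimal, self-contained illustration of that pattern; the Pod type and the killUnwantedPods helper are invented for the example and are not the Kubelet's actual API.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Pod is a stand-in for the Kubelet's internal pod type; the names in this
// sketch are illustrative only.
type Pod struct {
	Name string
}

// killUnwantedPods tears down every pod that is no longer desired. Running
// each teardown in its own goroutine keeps one slow pod from stalling the
// whole SyncPods iteration.
func killUnwantedPods(unwanted []Pod) {
	var wg sync.WaitGroup
	for _, pod := range unwanted {
		wg.Add(1)
		go func(p Pod) {
			defer wg.Done()
			// Placeholder for the real cleanup: stop containers, tear down
			// networking, release volumes.
			time.Sleep(100 * time.Millisecond)
			fmt.Printf("cleaned up pod %q\n", p.Name)
		}(pod)
	}
	// SyncPods still waits for all cleanups before returning, but the total
	// latency is now bounded by the slowest pod.
	wg.Wait()
}

func main() {
	killUnwantedPods([]Pod{{Name: "orphan-a"}, {Name: "orphan-b"}})
}
```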

@vmarmol vmarmol self-assigned this Apr 20, 2015
yujuhong (Contributor, Author)

This addresses #6764

wg.Add(1)
go func(container *kubecontainer.Container) {
	defer util.HandleCrash()
	// call the networking plugin for teardown
Contributor:

Can we add a TODO to handle this without singling out the pod infra container? The main motivation is Rocket integration, which does not have one. We're working on moving the pod infra container to be a Docker-only concept.

yujuhong (Author):

Done
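
A rough sketch of the per-container cleanup goroutine being discussed above, including the requested TODO about the pod infra container; the containerInfo type and the teardownNetwork/killContainer helpers are placeholders, not the Kubelet's real types.

```go
package main

import (
	"log"
	"sync"
)

// containerInfo is a placeholder for the container metadata the Kubelet
// actually tracks.
type containerInfo struct {
	ID         string
	IsPodInfra bool
}

func teardownNetwork(id string) { log.Printf("network teardown for %s", id) }

func killContainer(id string) { log.Printf("killing container %s", id) }

// cleanupContainers kills each container in its own goroutine; networking is
// torn down only for the pod infra container.
func cleanupContainers(containers []containerInfo) {
	var wg sync.WaitGroup
	for _, c := range containers {
		wg.Add(1)
		go func(c containerInfo) {
			defer wg.Done()
			// TODO: handle this without singling out the pod infra container;
			// runtimes such as Rocket do not have one.
			if c.IsPodInfra {
				teardownNetwork(c.ID)
			}
			killContainer(c.ID)
		}(c)
	}
	wg.Wait()
}

func main() {
	cleanupContainers([]containerInfo{
		{ID: "pod-infra", IsPodInfra: true},
		{ID: "app"},
	})
}
```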

vmarmol (Contributor) commented Apr 20, 2015

Minor comments, looks good overall. Thanks @yujuhong!

for i, c := range pod.Containers {
	containerIDs[i] = string(c.ID)
}
return kl.containerManager.GetRunningContainers(containerIDs)
Contributor:
Though not related to this PR, I have always had a question about this: I am not sure why the running containers are necessary for removing volumes. I suspect that if a volume is still being used by some container, TearDown() will just return an error (such as device busy). If we could remove this logic, it would be great.

As far as I can remember, this is part of a fix for volume leaks, but I am not sure the original problem was caused by tearing down volumes that were still in use.

Let me find out the issue number...

Contributor:

I believe that in theory we should be able to remove it and the volume plugins would handle it. However, we've had quite a few bugs in this area which led to data loss, and I believe that is what has kept the current logic in place.

Contributor:

ack
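
A rough illustration of the guard under discussion: skip volume teardown while any of the pod's containers are still running, rather than relying on TearDown() to fail. All names here (volume, cleanupOrphanedVolumes, the maps) are invented for the example and stand in for the Kubelet's container manager and volume plugins.

```go
package main

import "fmt"

// volume is a stand-in for a volume plugin's cleaner.
type volume struct{ name string }

func (v volume) TearDown() error {
	fmt.Printf("tearing down volume %s\n", v.name)
	return nil
}

// cleanupOrphanedVolumes removes volumes for pods that no longer exist, but
// defensively skips any pod that still has running containers.
func cleanupOrphanedVolumes(volumesByPod map[string][]volume, runningContainersByPod map[string]int) {
	for podUID, vols := range volumesByPod {
		if runningContainersByPod[podUID] > 0 {
			// In theory, letting TearDown fail (e.g. "device busy") would also
			// work; the explicit check guards against plugin bugs that could
			// otherwise lose data.
			continue
		}
		for _, v := range vols {
			if err := v.TearDown(); err != nil {
				fmt.Printf("failed to tear down %s: %v\n", v.name, err)
			}
		}
	}
}

func main() {
	cleanupOrphanedVolumes(
		map[string][]volume{"pod-a": {{name: "data"}}, "pod-b": {{name: "scratch"}}},
		map[string]int{"pod-a": 1, "pod-b": 0},
	)
}
```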

vmarmol (Contributor) commented Apr 20, 2015

LGTM, we can squash and merge

yujuhong (Contributor, Author)

Squashed. Thanks!

vmarmol added a commit that referenced this pull request Apr 20, 2015
Kubelet: parallelize cleaning up containers in unwanted pods
@vmarmol vmarmol merged commit d44e9b4 into kubernetes:master Apr 20, 2015
@yujuhong yujuhong deleted the para_cleanup branch May 8, 2015 17:36