provide active pods to cgroup cleanup #42585

Merged 1 commit on Mar 7, 2017
15 changes: 5 additions & 10 deletions pkg/kubelet/kubelet_pods.go
```diff
@@ -687,11 +687,7 @@ func (kl *Kubelet) podIsTerminated(pod *v1.Pod) bool {
 		// restarted.
 		status = pod.Status
 	}
-	if status.Phase == v1.PodFailed || status.Phase == v1.PodSucceeded {
-		return true
-	}
-
-	return false
+	return status.Phase == v1.PodFailed || status.Phase == v1.PodSucceeded || (pod.DeletionTimestamp != nil && notRunning(status.ContainerStatuses))
```
Contributor:

@dashpole can we clean up the volume code to use this active pod lister method instead? Not necessary for this PR.

Contributor:

yes. Nothing would give me more joy :)

Contributor:

@derekwaynecarr your change to podIsTerminated may cause other race conditions (e.g., pod worker still trying to start new containers). Could you remove this?

/cc @Random-Liu @dchen1107

Contributor:

This shouldn't have any effect during pod startup, as this change only makes a difference once the deletion timestamp is set. I agree that we should do a more thorough review of what effects it could have, though. In general, I think this is a better definition of podIsTerminated: a pod that has been deleted and has no containers running is definitely in a terminal state.
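For readers following along: the new return statement leans on a `notRunning` helper over the pod's container statuses. A minimal sketch of what such a helper checks, assuming the `v1.ContainerStatus` type from the same imports as the surrounding file (the kubelet's actual implementation may differ in detail):

```go
// notRunning reports whether no container in statuses is currently running,
// i.e. every container is either waiting or already terminated.
// Sketch only; the real kubelet helper may differ.
func notRunning(statuses []v1.ContainerStatus) bool {
	for _, status := range statuses {
		if status.State.Terminated == nil && status.State.Waiting == nil {
			// A container with neither Terminated nor Waiting set is running.
			return false
		}
	}
	return true
}
```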

```diff
 }

 // OkToDeletePod returns true if all required node-level resources that a pod was consuming have
```
```diff
@@ -848,7 +844,7 @@ func (kl *Kubelet) HandlePodCleanups() error {

 	// Remove any cgroups in the hierarchy for pods that are no longer running.
 	if kl.cgroupsPerQOS {
-		kl.cleanupOrphanedPodCgroups(cgroupPods, runningPods)
+		kl.cleanupOrphanedPodCgroups(cgroupPods, activePods)
 	}

 	kl.backOff.GC()
```
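The `activePods` passed here is presumably derived by filtering the pod manager's pods through `podIsTerminated` above, so a pod whose deletion timestamp is set and whose containers have all stopped no longer shields its cgroup from cleanup. A hedged sketch of that filter (the helper name `filterOutTerminatedPods` is an assumption about the surrounding kubelet code):

```go
// filterOutTerminatedPods drops pods that podIsTerminated considers done,
// leaving the "active" set whose cgroups must be preserved.
// Sketch only; the actual kubelet method may differ.
func (kl *Kubelet) filterOutTerminatedPods(pods []*v1.Pod) []*v1.Pod {
	var filteredPods []*v1.Pod
	for _, p := range pods {
		if kl.podIsTerminated(p) {
			continue
		}
		filteredPods = append(filteredPods, p)
	}
	return filteredPods
}
```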
```diff
@@ -1511,13 +1507,12 @@ func (kl *Kubelet) GetPortForward(podName, podNamespace string, podUID types.UID

 // cleanupOrphanedPodCgroups removes cgroups that should no longer exist.
 // it reconciles the cached state of cgroupPods with the specified list of runningPods
-func (kl *Kubelet) cleanupOrphanedPodCgroups(cgroupPods map[types.UID]cm.CgroupName, runningPods []*kubecontainer.Pod) {
+func (kl *Kubelet) cleanupOrphanedPodCgroups(cgroupPods map[types.UID]cm.CgroupName, activePods []*v1.Pod) {
 	// Add all running pods to the set that we want to preserve
 	podSet := sets.NewString()
-	for _, pod := range runningPods {
-		podSet.Insert(string(pod.ID))
+	for _, pod := range activePods {
+		podSet.Insert(string(pod.UID))
 	}

 	pcm := kl.containerManager.NewPodContainerManager()

 	// Iterate over all the found pods to verify if they should be running
```
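The body below the fold presumably iterates over `cgroupPods` and tears down any cgroup whose pod UID is missing from `podSet`. A hedged sketch of that loop (`pcm.Destroy` and its asynchronous use are assumptions beyond what the diff shows):

```go
// Sketch of the elided cleanup loop; the real code may add further guards
// (e.g. skipping pods whose volumes are still mounted).
for uid, cgroupName := range cgroupPods {
	if podSet.Has(string(uid)) {
		// The pod is still active; keep its cgroup.
		continue
	}
	// Destroy the orphaned cgroup asynchronously so a slow removal
	// does not block cleanup of the remaining pods.
	go pcm.Destroy(cgroupName)
}
```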