
Avoid enqueue when status of k8s pods change #835

Merged
merged 1 commit into virtual-kubelet:master on Jun 15, 2020

Conversation

cwdsuzhou
Contributor

No description provided.

@cwdsuzhou
Contributor Author

@cpuguy83 PTAL

We should ignore podStatus changes; if not, there are too many useless enqueues here.

pc.k8sQ.AddRateLimited(key)
// ignore ResourceVersion change
oldPod.ResourceVersion = newPod.ResourceVersion
if !cmp.Equal(oldPod.Spec, newPod.Spec) ||
Contributor

If you're trying to achieve update suppression, you can use:

func podsEqual(pod1, pod2 *corev1.Pod) bool {

Also, you need to check deletion timestamp and phase change IIRC

Contributor Author

No, wo should Ignore status update here. But we need check spec, deleteTime, Annotation, Labels and so on. So I exclude pod status comparison

Contributor

Please add a more targeted equal method that tests the superset of the attributes of podsEqual. Something like:

podNeedsWork(old, new *v1.pod) bool {
 if !podsEqual(...) { return true };
 if ... return true
 return false
}

I'm curious, where is the perf concern?
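
For reference, a minimal sketch of what such a helper could look like in Go, assuming podsEqual is the existing comparison helper mentioned above and corev1 is k8s.io/api/core/v1; the name podNeedsWork and the exact set of checks are illustrative, not the merged implementation:

```go
// podNeedsWork reports whether an update event carries changes the
// controller must act on. Sketch only: podsEqual is assumed to compare
// spec, labels, annotations, etc. while ignoring Status, so deletion
// metadata is checked separately here.
func podNeedsWork(oldPod, newPod *corev1.Pod) bool {
	if !podsEqual(oldPod, newPod) {
		return true
	}
	// DeletionTimestamp lives in ObjectMeta and is not covered by podsEqual.
	if !oldPod.DeletionTimestamp.Equal(newPod.DeletionTimestamp) {
		return true
	}
	return false
}
```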

Contributor Author

Done, thanks. PTAL.

newPod := newObj.(*corev1.Pod)

// At this point we know that something in .metadata or .spec has changed, so we must proceed to sync the pod.
if key, err := cache.MetaNamespaceKeyFunc(newPod); err != nil {
	log.G(ctx).Error(err)
} else {
	pc.k8sQ.AddRateLimited(key)
	// ignore ResourceVersion change
	oldPod.ResourceVersion = newPod.ResourceVersion
Contributor

aren't these cache references? So you're mutating the cache here?
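
If the goal is just to neutralize ResourceVersion for the comparison, one way to avoid touching the shared object is to deep-copy first. A simplified sketch, assuming oldPod and newPod come straight from the informer and cmp is github.com/google/go-cmp/cmp (not the logic actually merged here):

```go
// Sketch only: objects handed to informer event handlers are shared with the
// cache, so copy before mutating anything, even for a throwaway comparison.
oldPodCopy := oldPod.DeepCopy()
oldPodCopy.ResourceVersion = newPod.ResourceVersion // ignore ResourceVersion change
if !cmp.Equal(oldPodCopy.Spec, newPod.Spec) {
	pc.k8sQ.AddRateLimited(key)
}
```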


@sargun
Contributor

sargun commented Jun 9, 2020

Somehow my review got split. Nonetheless, I think this is a fine idea.

@cwdsuzhou
Contributor Author

@sargun PR updated, thanks

if !podsEqual(oldPod, newPod) {
	return true
}
if !oldPod.DeletionTimestamp.Equal(newPod.DeletionTimestamp) {
Contributor

In the earlier patch, you checked all of object meta. Any reason for this change?

Contributor Author

I think updates to Labels, Annotations, and DeletionTimestamp should enqueue; the earlier patch checked all of ObjectMeta for easier comparison. But I found that podsEqual already checks labels and annotations, so I think this approach is simpler and still effective.

Contributor

I agree. Why didn't you add labels and annotations here? Honestly, we should probably invoke the update pod callback on label and annotation changes -- but that's work for a different time.

Contributor Author

@sargun podsEqual already checks labels and annotations.

Contributor

We should also check to make sure the deletion grace period is equal, and then I think it'll be good to go.

Contributor

@sargun sargun Jun 11, 2020

I think it can change, let's take this example:

| Timestamp | Actor | Action |
| --- | --- | --- |
| 00:00:00 | User | Delete pod A, with grace period of 30s |
| 00:00:00 | K8s | Sets deletion timestamp to 00:00:30, and grace period to 30s |
| 00:00:15 | User | Delete pod A, with grace period of 15s |
| 00:00:15 | K8s | Sets deletion timestamp to 00:00:30, and grace period to 15s |

In this case, the grace period can change without the deletion timestamp changing.
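
Since DeletionGracePeriodSeconds is a *int64 on ObjectMeta, the enqueue filter would need a nil-safe comparison for it. A minimal sketch, with an illustrative helper name:

```go
// gracePeriodEqual reports whether two grace-period pointers represent the
// same value, treating nil as "unset". Sketch only.
func gracePeriodEqual(oldGrace, newGrace *int64) bool {
	if oldGrace == nil && newGrace == nil {
		return true
	}
	if oldGrace != nil && newGrace != nil {
		return *oldGrace == *newGrace
	}
	return false
}
```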

Contributor Author

Yes, that is a valid case, but it would not affect the final deletion time.

Contributor Author

If you think it is necessary, I would like to update this PR.

Contributor

It doesn't hurt to add this check, IMHO.

Contributor Author

PR updated, PTAL

thanks

@cwdsuzhou cwdsuzhou requested a review from sargun June 10, 2020 09:55
@sargun
Contributor

sargun commented Jun 11, 2020

One last question, and then we're good to go.

@sargun sargun merged commit 05fc1a4 into virtual-kubelet:master Jun 15, 2020
@shsjshentao

shsjshentao commented Nov 3, 2021

I think it should not ignore podStatus changes when enqueueing. If I set graceful deletion and delete a pod, and all containers have died in advance, it should be killed immediately instead of waiting for the graceful deletion period.

@cwdsuzhou
Contributor Author

> I think it should not ignore podStatus changes when enqueueing. If I set graceful deletion and delete a pod, and all containers have died in advance, it should be killed immediately instead of waiting for the graceful deletion period.

You can take a look at these 2 PRs; they may solve your issue:

#902
#874

@shsjshentao

shsjshentao commented Nov 4, 2021

> I think it should not ignore podStatus changes when enqueueing. If I set graceful deletion and delete a pod, and all containers have died in advance, it should be killed immediately instead of waiting for the graceful deletion period.
>
> You can take a look at these 2 PRs; they may solve your issue:
>
> #902 #874

No, these 2 PRs cannot solve this problem.

For example:

00:00:00
- I delete a pod in k8s with a 300-second grace period
- The vk provider deletes the real pod with a 300-second grace period
- vk enqueues the k8s pod in the deletion queue with a 300-second delay

00:01:00
- All containers of the vk pod have exited early
- The real pod has been deleted and is now missing
- The vk provider receives the missing signal and updates the pod status to Succeeded via the NotifyPods method
- But nothing changes until the 300 seconds elapse

00:05:00
- vk deletes the pod in k8s
