Avoid enqueue when status of k8s pods change #835
Conversation
Force-pushed from 22b5202 to 3d60b97.
@cpuguy83 PTAL. We should ignore pod status changes here; currently they are not ignored, which causes many useless enqueues.
node/podcontroller.go (outdated)

```go
pc.k8sQ.AddRateLimited(key)
// ignore ResourceVersion change
oldPod.ResourceVersion = newPod.ResourceVersion
if !cmp.Equal(oldPod.Spec, newPod.Spec) ||
```
If you're trying to achieve update suppression, you can use `podsEqual` (line 103 in a67cfab):

```go
func podsEqual(pod1, pod2 *corev1.Pod) bool {
```
Also, you need to check deletion timestamp and phase changes, IIRC.
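For readers following along, here is a minimal, self-contained sketch of what such update suppression could look like in an update handler. The `Pod` struct, `onUpdate`, and the `enqueue` callback are illustrative stand-ins, not the actual virtual-kubelet code, which uses `k8s.io/api/core/v1` types and a rate-limited workqueue:

```go
package main

import (
	"fmt"
	"reflect"
)

// Pod is a simplified stand-in for the fields of corev1.Pod that the
// sync loop acts on (hypothetical type for illustration only).
type Pod struct {
	Labels      map[string]string
	Annotations map[string]string
	Spec        string // placeholder for the full PodSpec
	Status      string // placeholder for the full PodStatus
}

// podsEqual mirrors the idea of the upstream helper: two pods are equal
// for sync purposes when spec, labels, and annotations are unchanged.
// Status is deliberately not compared.
func podsEqual(p1, p2 *Pod) bool {
	return reflect.DeepEqual(p1.Spec, p2.Spec) &&
		reflect.DeepEqual(p1.Labels, p2.Labels) &&
		reflect.DeepEqual(p1.Annotations, p2.Annotations)
}

// onUpdate suppresses enqueues for updates that change nothing the
// provider cares about (e.g. status-only updates).
func onUpdate(oldPod, newPod *Pod, enqueue func(*Pod)) {
	if podsEqual(oldPod, newPod) {
		return // nothing the sync loop acts on has changed
	}
	enqueue(newPod)
}

func main() {
	enqueued := 0
	enqueue := func(*Pod) { enqueued++ }

	oldPod := &Pod{Spec: "spec-v1", Status: "Pending"}
	// Status-only change: suppressed.
	onUpdate(oldPod, &Pod{Spec: "spec-v1", Status: "Running"}, enqueue)
	// Spec change: enqueued.
	onUpdate(oldPod, &Pod{Spec: "spec-v2", Status: "Running"}, enqueue)
	fmt.Println(enqueued) // 1
}
```

The design point is that the comparison lives in the event handler, so status-only updates never reach the queue at all.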
No, we should ignore status updates here, but we still need to check the spec, deletion timestamp, annotations, labels, and so on. That is why I excluded the pod status comparison.
Please add a more targeted equal method that tests a superset of the attributes of `podsEqual`. Something like:

```go
func podNeedsWork(old, new *corev1.Pod) bool {
	if !podsEqual(old, new) {
		return true
	}
	if ... {
		return true
	}
	return false
}
```
I'm curious, where is the perf concern?
Done, PTAL. Thanks!
node/podcontroller.go (outdated)

```go
newPod := newObj.(*corev1.Pod)

// At this point we know that something in .metadata or .spec has changed, so we must proceed to sync the pod.
if key, err := cache.MetaNamespaceKeyFunc(newPod); err != nil {
	log.G(ctx).Error(err)
} else {
	pc.k8sQ.AddRateLimited(key)
	// ignore ResourceVersion change
	oldPod.ResourceVersion = newPod.ResourceVersion
```
Aren't these cache references? So you're mutating the cache here?
Somehow my review got split. Nonetheless, I think this is a fine idea.
Force-pushed from 3d60b97 to 732b352.
@sargun PR updated, thanks
```go
if !podsEqual(oldPod, newPod) {
	return true
}
if !oldPod.DeletionTimestamp.Equal(newPod.DeletionTimestamp) {
```
In the earlier patch, you checked all of object meta. Any reason for this change?
I think label, annotation, and deletion timestamp updates should enqueue; the earlier patch checked all of object meta for easier comparison. But I found that podsEqual already checks labels and annotations, so I think this approach is easier and just as effective.
I agree. Why didn't you add labels and annotations here? Honestly, we should probably invoke the update-pod callback on label and annotation changes, but that's work for a different time.
@sargun `podsEqual` already checks labels and annotations.
We should also check that the deletion grace period is equal, and then I think it'll be good to go.
I think it can change, let's take this example:
| Timestamp | Actor | Action |
| --- | --- | --- |
| 00:00:00 | User | Deletes pod A with a grace period of 30s |
| 00:00:00 | K8s | Sets deletion timestamp to 00:00:30 and grace period to 30s |
| 00:00:15 | User | Deletes pod A with a grace period of 15s |
| 00:00:15 | K8s | Sets deletion timestamp to 00:00:30 and grace period to 15s |
Here, the grace period changes without the deletion timestamp changing.
Yes, that case exists, but it would not affect the final deletion time.
If you think it is necessary, I would like to update this PR.
It doesn't hurt to add this check, IMHO.
PR updated, PTAL. Thanks!
One last question, and then we're good to go.
Force-pushed from 732b352 to 3db9ab9.
I think pod status changes should not be ignored when enqueuing. If I delete a pod with graceful deletion and all of its containers die ahead of time, the pod should be killed immediately instead of waiting out the grace period.
No, these two PRs cannot solve this problem. For example: