Don't update the Pod object after each scheduling attempt #73700
Conversation
Force-pushed from a47a81a to bc5fe08
Force-pushed from bc5fe08 to 4495fe9
/assign @bsalamat
Thanks, @denkensk! Looks good overall. I have some comments about the implementation.
@@ -260,15 +260,34 @@ func podTimestamp(pod *v1.Pod) *metav1.Time {
	return &condition.LastProbeTime
}

// podInfo is minimum cell in the activeQ or the unschedulableQ.
instead of pointing to activeQ or unschedulableQ, just use "scheduling queue" please.
Done thank you.
// podInfo is minimum cell in the activeQ or the unschedulableQ.
type podInfo struct {
	pod *v1.Pod
	// The time pod added to the activeQ or the unschedulableQ.
Same comment about activeQ and unschedulableQ.
Done thank you.
type podInfo struct {
	pod *v1.Pod
	// The time pod added to the activeQ or the unschedulableQ.
	addTime time.Time
I would use queuedTime or just simply timestamp.
Done thank you.
	addTime time.Time
}

func getPodInfo(pod *v1.Pod, clock util.Clock) *podInfo {
get... gives the impression that this is a getter, but it is not. It is actually a factory that builds a podInfo object. I would rename it to newPodInfo or makePodInfo.
Given how this function is used and the clock that is passed to it, I think it makes sense for the function to be a method of PriorityQueue. That way, you don't need to pass the clock to it every time. You can have a different version of it, for example newPodInfoNoTimestamp or something like that, for the cases where you want to avoid getting the time.
Done. Thanks for your suggestions.
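As a minimal, self-contained sketch of where this suggestion leads (Pod, Clock, and realClock are stand-ins for the real v1.Pod and util.Clock; this is illustrative, not the actual scheduler code):

package main

import (
	"fmt"
	"time"
)

// Stand-ins for v1.Pod and util.Clock, to keep the sketch self-contained.
type Pod struct{ Namespace, Name string }

type Clock interface{ Now() time.Time }

type realClock struct{}

func (realClock) Now() time.Time { return time.Now() }

type podInfo struct {
	pod       *Pod
	timestamp time.Time
}

type PriorityQueue struct{ clock Clock }

// newPodInfo is a factory, not a getter: it stamps the podInfo with the
// queue's own clock, so callers never have to pass a clock around.
func (p *PriorityQueue) newPodInfo(pod *Pod) *podInfo {
	return &podInfo{pod: pod, timestamp: p.clock.Now()}
}

// newPodInfoNoTimestamp builds a key-only podInfo for Get/Delete lookups
// where the timestamp is irrelevant.
func newPodInfoNoTimestamp(pod *Pod) *podInfo {
	return &podInfo{pod: pod}
}

func main() {
	p := &PriorityQueue{clock: realClock{}}
	fmt.Println(p.newPodInfo(&Pod{Namespace: "default", Name: "nginx"}).timestamp)
	fmt.Println(newPodInfoNoTimestamp(&Pod{Namespace: "default", Name: "nginx"}).timestamp.IsZero()) // true
}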
// that a pod determined as unschedulable multiple times doesn't block any newer pod.
// This behavior ensures that an unschedulable pod does not block head of the queue when there
// are frequent events that move pods to the active queue.
func TestPodFailedSchedulingMultipleTimesDoesNotBlockNewerPod(t *testing.T) {
Why is this test deleted?
Previously, I deleted this test because it was introduced by the previous PR. But now I think this test can be retained; I made some modifications. Thank you.
I missed an important problem in this PR in my first round of reviews. The timestamp of the pod should generally reflect the last time the scheduler has tried scheduling the pod. The time that the pod is added to the queue is a pretty good estimation of the time that the scheduler has tried scheduling the pod. Or, when the pod is recently created, the time it is added to the queue is a good estimation of the pod creation time. That's why the time that the pod is added to the scheduling queue is what we are using here.
This time should be recorded when the pod is added to the queue and must not be changed as long as the pod is in the scheduling queue, no matter which substructure of the scheduling queue it moves to. In the current PR, the time is updated every time the pod is added to the activeQ. So, a pod that was tried a long time ago and a pod that is recently tried get almost the same timestamp when they are moved to the activeQ after an event that moves all to the activeQ.
The timestamp should exist in the unschedulable and backoff queues too and shouldn't be changed until the pod is removed from the queue.
Thanks for your review. I also agree that the timestamp shouldn't be changed until the pod is removed from the queue. I will add the timestamp to the backoff queue like the unschedulableQ, and I will implement that as soon as possible. Thank you again for your help.
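A small, runnable sketch of the invariant being discussed, with the sub-queues simplified to maps and all names illustrative rather than taken from the real code: the timestamp is set once when a pod first enters the scheduling queue and travels with the podInfo unchanged as it moves between sub-queues.

package main

import (
	"fmt"
	"time"
)

type podInfo struct {
	name      string
	timestamp time.Time
}

type schedulingQueue struct {
	activeQ        map[string]*podInfo
	unschedulableQ map[string]*podInfo
}

// add stamps the podInfo exactly once, when the pod first enters the queue.
func (q *schedulingQueue) add(name string) {
	q.activeQ[name] = &podInfo{name: name, timestamp: time.Now()}
}

// markUnschedulable and moveAllToActiveQueue move the same podInfo object
// between sub-queues, so the original timestamp is preserved.
func (q *schedulingQueue) markUnschedulable(name string) {
	if pInfo, ok := q.activeQ[name]; ok {
		delete(q.activeQ, name)
		q.unschedulableQ[name] = pInfo
	}
}

func (q *schedulingQueue) moveAllToActiveQueue() {
	for name, pInfo := range q.unschedulableQ {
		q.activeQ[name] = pInfo // timestamp unchanged
		delete(q.unschedulableQ, name)
	}
}

func main() {
	q := &schedulingQueue{activeQ: map[string]*podInfo{}, unschedulableQ: map[string]*podInfo{}}
	q.add("pod-a")
	q.markUnschedulable("pod-a")
	q.moveAllToActiveQueue()
	fmt.Println(q.activeQ["pod-a"].timestamp) // still the original enqueue time
}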
Force-pushed from cd511eb to 0039575
Force-pushed from 0039575 to db3b8ef
Kindly ping @bsalamat PTAL~ thank you~
Thanks, @denkensk.
Your algorithm should add a timestamp to a pod when it is added to the scheduling queue and shouldn't change that timestamp as long as the pod is in the queue, no matter which sub-queue it is being moved to. Please see my comments for more details.
	p.nominatedPods.update(oldPod, newPod)
	err := p.activeQ.Update(newPod)
	err := p.activeQ.Update(p.newPodInfo(newPod))
We should keep the pod's existing timestamp here.
Done. I keep the pod's existing timestamp when the pod is updated. Thank you.
	p.podBackoffQ.Delete(newPod)
	err := p.activeQ.Add(newPod)
	p.podBackoffQ.Delete(newPodInfoNoTimestamp(oldPod))
	err := p.activeQ.Add(p.newPodInfo(newPod))
We should keep existing timestamp here too.
Done
@@ -562,7 +576,7 @@ func (p *PriorityQueue) Update(oldPod, newPod *v1.Pod) error {
	// If the pod is updated reset backoff
	p.clearPodBackoff(newPod)
	p.unschedulableQ.delete(usPod)
	err := p.activeQ.Add(newPod)
	err := p.activeQ.Add(p.newPodInfo(newPod))
and here
Thanks~ Done
func (p *PriorityQueue) movePodsToActiveQueue(podInfoList []*podInfo) {
	for _, pInfo := range podInfoList {
		pod := pInfo.pod
		if p.isPodBackingOff(pInfo.pod) {
You can use pod instead of pInfo.pod here and below.
Thanks~ Done
Force-pushed from db3b8ef to ebdbb57
Thanks, @bsalamat, for your review. I have addressed your comments; they may show as outdated because of the force push.
	err := p.activeQ.Update(newPod)
	newPodInfo := p.newPodInfo(newPod)
	newPodInfo.timestamp = oldPodInfo.(*podInfo).timestamp
	err := p.activeQ.Update(newPodInfo)
Done. I keep the pod's existing timestamp when the pod is updated. Thank you.
	p.podBackoffQ.Delete(newPodInfoNoTimestamp(oldPod))
	newPodInfo := p.newPodInfo(newPod)
	newPodInfo.timestamp = oldPodInfo.(*podInfo).timestamp
	err := p.activeQ.Add(newPodInfo)
Done.
	if p.isPodBackingOff(pod) {
		if err := p.podBackoffQ.Add(pod); err != nil {
		if err := p.podBackoffQ.Add(pInfo); err != nil {
Done. Thank you.
	p.unschedulableQ.delete(usPod)
	err := p.activeQ.Add(newPod)
	p.unschedulableQ.delete(usPodInfo.pod)
	err := p.activeQ.Add(usPodInfo)
Done. Thank you.
Thanks, @denkensk. Please see my comments. They are mostly minor comments. I will review the test after they are addressed.
@@ -332,13 +346,13 @@ func (p *PriorityQueue) AddIfNotPresent(pod *v1.Pod) error {
	if p.unschedulableQ.get(pod) != nil {
		return nil
	}
	if _, exists, _ := p.activeQ.Get(pod); exists {
	if _, exists, _ := p.activeQ.Get(newPodInfoNoTimestamp(pod)); exists {
Same comment as above
Done thank you
@@ -306,7 +320,7 @@ func (p *PriorityQueue) run() {
func (p *PriorityQueue) Add(pod *v1.Pod) error {
	p.lock.Lock()
	defer p.lock.Unlock()
	if err := p.activeQ.Add(pod); err != nil {
	if err := p.activeQ.Add(p.newPodInfo(pod)); err != nil {
In these functions where a podInfo object is needed, please make the object once by calling newPodInfoNoTimestamp or newPodInfo, store it in a local variable, and use that variable instead of calling the "constructors" multiple times.
Done thank you
@@ -405,10 +419,10 @@ func (p *PriorityQueue) AddUnschedulableIfNotPresent(pod *v1.Pod, podSchedulingC
	if p.unschedulableQ.get(pod) != nil {
		return fmt.Errorf("pod is already present in unschedulableQ")
	}
	if _, exists, _ := p.activeQ.Get(pod); exists {
	if _, exists, _ := p.activeQ.Get(newPodInfoNoTimestamp(pod)); exists {
ditto
Done thank you
	p.nominatedPods.update(oldPod, newPod)
	err := p.activeQ.Update(newPod)
	newPodInfo := p.newPodInfo(newPod)
Since you are updating the timestamp in the next line, you can use newPodInfoNoTimestamp here.
Done thank you
@@ -537,17 +551,21 @@ func (p *PriorityQueue) Update(oldPod, newPod *v1.Pod) error {

	if oldPod != nil {
		// If the pod is already in the active queue, just update it there.
		if _, exists, _ := p.activeQ.Get(oldPod); exists {
		if oldPodInfo, exists, _ := p.activeQ.Get(newPodInfoNoTimestamp(oldPod)); exists {
Same as previous comments, please call newPodInfoNoTimestamp(oldPod) once.
Done thank you
	p.nominatedPods.update(oldPod, newPod)
	p.podBackoffQ.Delete(newPod)
	err := p.activeQ.Add(newPod)
	p.podBackoffQ.Delete(newPodInfoNoTimestamp(newPod))
This is not your code, but we should delete oldPod, not newPod, from backoffQ. Given that a Pod name or namespace cannot be updated and we use a pod name and namespace as the key to the map, it shouldn't matter which one of newPod or oldPod we use, but for making the logic clear, we should use oldPod.
Well, yeah, I had some doubts when I first started looking at this code. Changed to oldPod, and the logic becomes clearer.
	p.unschedulableQ.delete(usPod)
	err := p.activeQ.Add(newPod)
	p.unschedulableQ.delete(usPodInfo.pod)
	err := p.activeQ.Add(usPodInfo)
This is incorrect. We should create a new podInfo from newPod, update its timestamp with the timestamp of usPodInfo, and add the new podInfo to the queue.
Done thank you
}

// Add adds a pod to the unschedulable pods.
// Add adds a pod to the unschedulable podInfoMap.
func (u *UnschedulablePodsMap) addOrUpdate(pod *v1.Pod) {
Please change the argument of this function to a *podInfo instead of a *Pod. That way, you don't need to add a clock to UnschedulablePodsMap. All callers of this function are inside PriorityQueue, which can build a podInfo by calling newPodInfo of PriorityQueue.
Done thank you
	// podInfoMap is a map key by a pod's full-name and the value is a pointer to the podInfo.
	podInfoMap map[string]*podInfo
	keyFunc    func(*v1.Pod) string
	clock      util.Clock
See my comment on addOrUpdate below. I don't think we need a clock here.
Done thank you
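A minimal sketch of the resulting shape under that suggestion (stand-in types, not the actual file): UnschedulablePodsMap stores *podInfo values directly and needs no clock of its own, because the caller, PriorityQueue in the real code, is the one that stamps the timestamp.

package main

import (
	"fmt"
	"time"
)

type Pod struct{ Namespace, Name string }

type podInfo struct {
	pod       *Pod
	timestamp time.Time
}

// UnschedulablePodsMap keys podInfo values by the pod's full name; note the
// absence of a clock field.
type UnschedulablePodsMap struct {
	podInfoMap map[string]*podInfo
	keyFunc    func(*Pod) string
}

// addOrUpdate takes an already-built *podInfo, so the timestamp decision
// stays with the caller.
func (u *UnschedulablePodsMap) addOrUpdate(pInfo *podInfo) {
	u.podInfoMap[u.keyFunc(pInfo.pod)] = pInfo
}

func main() {
	u := &UnschedulablePodsMap{
		podInfoMap: map[string]*podInfo{},
		keyFunc:    func(p *Pod) string { return p.Namespace + "/" + p.Name },
	}
	u.addOrUpdate(&podInfo{pod: &Pod{Namespace: "default", Name: "nginx"}, timestamp: time.Now()})
	fmt.Println(len(u.podInfoMap)) // 1
}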
Thanks, @denkensk. Code looks good. Test needs improvement.
}

func getUnschedulablePod(p *PriorityQueue, pod *v1.Pod) *v1.Pod {
	p.lock.Lock()
	defer p.lock.Unlock()
	return p.unschedulableQ.get(pod)
	if p.unschedulableQ.get(pod) != nil {
Please use:

pInfo := p.unschedulableQ.get(pod)
if pInfo != nil {
	return pInfo.pod
}
Done thank you
	Status:        v1.ConditionFalse,
	Reason:        v1.PodReasonUnschedulable,
	Message:       "fake scheduling failure",
	LastProbeTime: metav1.Now(),
You can remove LastProbeTime here, but please keep the rest of the code. Otherwise, the scheduling queue will not consider the pod as "unschedulable".
Done thank you
@@ -841,15 +836,6 @@ func TestPodFailedSchedulingMultipleTimesDoesNotBlockNewerPod(t *testing.T) {
		},
	}
	q.Add(&newerPod)

	// And then unschedulablePod was determined as unschedulable AGAIN.
	podutil.UpdatePodCondition(&unschedulablePod.Status, &v1.PodCondition{
same comment here and below.
Done thank you
@@ -990,3 +960,66 @@ func TestHighProirotyFlushUnschedulableQLeftover(t *testing.T) {
		t.Errorf("Expected: %v after Pop, but got: %v", medPriorityPod.Name, p.Name)
	}
}

// TestPodInfoInActiveQ tests that the pods with the same priority is sorted by the podInfo.timestamp.
s/is/are/
Done thank you
@@ -990,3 +960,66 @@ func TestHighProirotyFlushUnschedulableQLeftover(t *testing.T) {
		t.Errorf("Expected: %v after Pop, but got: %v", medPriorityPod.Name, p.Name)
	}
}

// TestPodInfoInActiveQ tests that the pods with the same priority is sorted by the podInfo.timestamp.
func TestPodInfoInActiveQComp(t *testing.T) {
We need to test many more scenarios in this test, such as updating a pod in the activeQ, adding an unschedulable pod to the unschedulableQ and then triggering a move request to the activeQ and ensuring that the timestamp is not updated, another similar test to check that when a pod goes to the backoffQ and then the activeQ its timestamp remains the same, ....
You may want to take a look at this test to get ideas about how you can add many test scenarios involving various operations in this test.
Thanks for your help. I added many more scenarios to this test covering the activeQ/unschedulableQ/backoffQ. If anything else is missing, please comment and I'll add it as soon as possible.
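For reference, a stripped-down illustration of the operations-based, table-driven pattern being described. The queue and the operation helpers here are simplified stand-ins (the real test drives a PriorityQueue, and names like addPodBackoffQ are hypothetical in this sketch), but they show how a scenario can assert that a timestamp survives a move from the backoffQ to the activeQ.

package queue

import (
	"testing"
	"time"
)

type podInfo struct {
	name      string
	timestamp time.Time
}

// fakeQueue is a stand-in with just enough structure to show the pattern:
// each sub-queue holds podInfo pointers.
type fakeQueue struct {
	activeQ, backoffQ, unschedulableQ map[string]*podInfo
}

type operation func(q *fakeQueue)

func addPodBackoffQ(pInfo *podInfo) operation {
	return func(q *fakeQueue) { q.backoffQ[pInfo.name] = pInfo }
}

// moveAllToActiveQ re-adds the same podInfo objects, so timestamps survive the move.
func moveAllToActiveQ() operation {
	return func(q *fakeQueue) {
		for k, v := range q.backoffQ {
			q.activeQ[k] = v
			delete(q.backoffQ, k)
		}
	}
}

func TestPodTimestampSketch(t *testing.T) {
	pInfo1 := &podInfo{name: "p1", timestamp: time.Now()}
	tests := []struct {
		name       string
		operations []operation
		want       time.Time
	}{
		{
			name:       "pod keeps its timestamp when moved from backoffQ to activeQ",
			operations: []operation{addPodBackoffQ(pInfo1), moveAllToActiveQ()},
			want:       pInfo1.timestamp,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			q := &fakeQueue{
				activeQ:        map[string]*podInfo{},
				backoffQ:       map[string]*podInfo{},
				unschedulableQ: map[string]*podInfo{},
			}
			for _, op := range tt.operations {
				op(q)
			}
			if got := q.activeQ["p1"].timestamp; !got.Equal(tt.want) {
				t.Errorf("timestamp changed: got %v, want %v", got, tt.want)
			}
		})
	}
}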
Force-pushed from 664e678 to fa8c3df
Thanks, @denkensk. Looks good. Just a few minor comments.
		expected []*podInfo
	}{
		{
			name: "add two pod to activeQ and pod them by the timestamp",
s/pod them/sort them/
Done thank you
@@ -990,3 +979,135 @@ func TestHighProirotyFlushUnschedulableQLeftover(t *testing.T) {
		t.Errorf("Expected: %v after Pop, but got: %v", medPriorityPod.Name, p.Name)
	}
}

// TestHighProirotyPodInfo tests the operations related to podInfo.
func TestHighProirotyPodInfo(t *testing.T) {
I would rename this to something like TestPodTimestamp. In this test we don't care much about priority, so having "HighPriority" in the name of the test is not necessary.
Done thank you
		{
			name: "add one pod to BackoffQ and move it to activeQ",
			operations: []operation{
				addPodBackoffQ(pInfo1), flushBackoffQ(), moveAllToActiveQ(),
Could you add pInfo2 to the activeQ in this test case and check that pInfo2 is after pInfo1 in the expected output?
Done thank you
	addOrUpdateUnschedulablePod(q, &midPod)

	// Update pod condition to highPod.
	podutil.UpdatePodCondition(&highPod.Status, &v1.PodCondition{
Please keep these status updates and move them above the lines that add pods to the unschedulableQ.
Done thank you
Force-pushed from fa8c3df to 0b945bc
Force-pushed from 0b945bc to ea9e1a4
	return func() {
		queue.lock.Lock()
		defer queue.lock.Unlock()
		queue.podBackoffQ.Add(pInfo)
@bsalamat thank you~ I added comments here to facilitate your review. The lock is taken above and here to prevent the data race occurring in https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73700/pull-kubernetes-bazel-test/76841/
Thanks a lot, @denkensk!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: bsalamat, denkensk. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Thank you very much for your patience and your help. @bsalamat
Looks like this introduced a flake in unit tests; first occurrence: https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73700/pull-kubernetes-bazel-test/76761#pkgschedulerinternalqueuego_default_test
What type of PR is this?
/kind bug
/sig scheduling
/priority important-soon
What this PR does / why we need it:
We recently added a PR to put a timestamp on a Pod object every time it is attempted by the scheduler. However, this change could increase the number of pod updates and could raise the load of the API server in some scenarios. This is particularly problematic in clusters with a lot of unschedulable pods that remain unschedulable for a long time and get attempted many times.
In fact, we don't need to keep this timestamp in the API object. This is needed only for the scheduler to keep track of the time that a pod is attempted. In other words, this timestamp can be kept in the scheduler's internal state only. We should add a timestamp to every pod in the scheduling queue. The timestamp will be the time that the pod is added to the queue. The scheduler should use this timestamp for sorting pods with the same priority. By doing this we will not update the API object after every scheduling attempt.
When I went to solve this issue, I found that flushing unschedulableQ leftovers to the activeQ after 30s depends on the timestamp of the pod's last scheduling attempt. So, in addition to recording the time when a pod joins the activeQ, I also record the time when it joins the unschedulableQ.
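As a rough sketch of the resulting ordering (not the exact scheduler code): the activeQ comparison falls back to the in-memory enqueue timestamp when two pods share the same priority, so no write to the API object is needed per scheduling attempt.

package main

import (
	"fmt"
	"sort"
	"time"
)

type podInfo struct {
	name      string
	priority  int32
	timestamp time.Time
}

// less orders the active queue: higher priority first; for equal priorities,
// the pod that entered the scheduling queue earlier comes first.
func less(a, b *podInfo) bool {
	if a.priority != b.priority {
		return a.priority > b.priority
	}
	return a.timestamp.Before(b.timestamp)
}

func main() {
	now := time.Now()
	pods := []*podInfo{
		{name: "late-high", priority: 100, timestamp: now.Add(2 * time.Second)},
		{name: "early-high", priority: 100, timestamp: now},
		{name: "low", priority: 10, timestamp: now},
	}
	sort.Slice(pods, func(i, j int) bool { return less(pods[i], pods[j]) })
	for _, p := range pods {
		fmt.Println(p.name) // early-high, late-high, low
	}
}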
Which issue(s) this PR fixes:
Fixes #73485
Special notes for your reviewer: