add event when pod specify a not exist schedulerName #117407
Conversation
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @olderTaoist. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: olderTaoist. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/cc PTAL @chendave @sanposhiho
@olderTaoist: GitHub didn't allow me to request PR reviews from the following users: PTAL. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
pkg/scheduler/eventhandlers.go
Outdated
We cannot go with this implementation. The fact that this scheduler doesn't match the Pod's SchedulerName doesn't mean the cluster has no scheduler that matches it.
Other schedulers may be running in another Pod/process.
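The behavior @sanposhiho describes can be sketched as follows. This is a simplified, hypothetical illustration (the `Pod` type and `responsibleForPod` helper here stand in for the real v1.Pod and kube-scheduler's profile lookup), showing why a non-matching SchedulerName is silently ignored rather than failed:

```go
package main

import "fmt"

// Pod is a simplified stand-in for v1.Pod; only the field relevant to
// this discussion is kept (hypothetical type, for illustration).
type Pod struct {
	Name          string
	SchedulerName string
}

// responsibleForPod sketches the profile check: a scheduler instance
// only handles Pods whose spec.schedulerName matches one of its own
// profiles. A non-matching Pod is ignored, not failed, because another
// scheduler process in the cluster may be responsible for it.
func responsibleForPod(pod Pod, profiles map[string]bool) bool {
	return profiles[pod.SchedulerName]
}

func main() {
	profiles := map[string]bool{"default-scheduler": true}
	fmt.Println(responsibleForPod(Pod{Name: "web-0", SchedulerName: "default-scheduler"}, profiles)) // true
	fmt.Println(responsibleForPod(Pod{Name: "web-1", SchedulerName: "volcano"}, profiles))           // false: ignored, not an error
}
```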
We need to notify the user when the SchedulerName is not served by kube-scheduler, regardless of whether other schedulers are deployed; for other schedulers (kube-batch, volcano) we can do nothing.
regardless of whether other schedulers are deployed
I don't agree. SchedulerName is the API for defining which scheduler is responsible for the Pod, and if SchedulerName doesn't match this scheduler, it ignores that Pod: this behavior makes complete sense.
If we record FailedScheduling here, how could another scheduler (one matching the SchedulerName) work for this Pod?
Maybe FailedScheduling is confusing; I changed the content of the event message, as follows:
0s Warning FindScheduler pod/web-0 test scheduler not in [default-scheduler]
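The message text in the sample event above can be sketched as simple formatting over the scheduler's known profile names (`findSchedulerMessage` is a hypothetical helper, not the PR's actual code):

```go
package main

import "fmt"

// findSchedulerMessage sketches how the warning text above could be
// built: the Pod's schedulerName followed by the list of profiles this
// scheduler instance actually serves (hypothetical helper).
func findSchedulerMessage(schedulerName string, profiles []string) string {
	return fmt.Sprintf("%s scheduler not in %v", schedulerName, profiles)
}

func main() {
	fmt.Println(findSchedulerMessage("test", []string{"default-scheduler"}))
	// test scheduler not in [default-scheduler]
}
```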
This will cause multiple warning events if there are multiple schedulers, even when some scheduler actually matches the given SchedulerName.
Again, I would say we cannot go with this. As @lowang-bh said, if multiple schedulers are there, all Pods will get the warning events even if everything works well.
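The objection can be made concrete with a small simulation (hypothetical helper and scheduler names): every scheduler instance watches all Pods, so a Pod correctly owned by one scheduler would still trigger the proposed warning from every other instance.

```go
package main

import "fmt"

// wouldWarn reports whether a scheduler instance configured with the
// given profiles would record the proposed warning for a Pod's
// schedulerName (hypothetical sketch of the PR's check).
func wouldWarn(profiles []string, schedulerName string) bool {
	for _, p := range profiles {
		if p == schedulerName {
			return false
		}
	}
	return true
}

func main() {
	// Pod web-0 asks for "volcano". The volcano instance handles it
	// fine, yet the default kube-scheduler instance would still emit a
	// spurious warning, because each instance only knows its own profiles.
	fmt.Println(wouldWarn([]string{"volcano"}, "volcano"))           // false: owning scheduler, no warning
	fmt.Println(wouldWarn([]string{"default-scheduler"}, "volcano")) // true: spurious warning
}
```

This is why the warnings fire "even if everything works well": no single instance can tell an unknown SchedulerName from one served by a different scheduler process.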
I agree with @sanposhiho, we shouldn't record an event because the event stream is shared between all controllers handling the pod, and so this will be confusing because the assumption here is that the other scheduler will pick up the pod and handle it. Also, here you are recording an event on each update, which will spam the event stream.
I would be onboard with a log that gets printed on add only (this will probably require adding a dedicated event handler though).
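A dedicated add-only handler along the lines @ahg-g suggests could look roughly like this. This is a stdlib sketch with hypothetical simplified types; in the real code it would be a cache.ResourceEventHandlerFuncs registered on the Pod informer with only AddFunc set:

```go
package main

import (
	"fmt"
	"log"
)

// Pod is a simplified stand-in for v1.Pod (hypothetical type).
type Pod struct {
	Name          string
	SchedulerName string
}

// shouldLogUnknownScheduler reports whether a Pod's schedulerName is
// outside the profiles this scheduler instance serves.
func shouldLogUnknownScheduler(pod Pod, profiles map[string]bool) bool {
	return !profiles[pod.SchedulerName]
}

// addOnlyHandler mimics the shape of client-go's
// cache.ResourceEventHandlerFuncs with only AddFunc set: the message is
// produced once, when the Pod first appears, never on updates.
type addOnlyHandler struct {
	AddFunc func(pod Pod)
}

func newUnknownSchedulerLogger(profiles map[string]bool) addOnlyHandler {
	return addOnlyHandler{
		AddFunc: func(pod Pod) {
			if shouldLogUnknownScheduler(pod, profiles) {
				// A log line rather than an Event keeps this out of the
				// Pod's shared event stream.
				log.Printf("pod %q requests scheduler %q, which this instance does not serve",
					pod.Name, pod.SchedulerName)
			}
		},
	}
}

func main() {
	h := newUnknownSchedulerLogger(map[string]bool{"default-scheduler": true})
	h.AddFunc(Pod{Name: "web-0", SchedulerName: "test"}) // logged once, on add only
	fmt.Println("handler registered for add events only")
}
```

Logging instead of recording an Event avoids misleading users of other schedulers, while still leaving a trace for operators debugging a typoed schedulerName.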
I agree with @sanposhiho, we shouldn't record an event because the event stream is shared between all controllers handling the pod, and so this will be confusing because the assumption here is that the other scheduler will pick up the pod and handle it. Also, here you are recording an event on each update, which will spam the event stream.
I considered the scenario where the update event occurs: because of the && operator, we don't record an event when the pod has already been scheduled.
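The && guard being described can be sketched like this (hypothetical simplified types; the real check would inspect the updated Pod's spec.nodeName):

```go
package main

import "fmt"

// Pod is a simplified stand-in for v1.Pod (hypothetical type).
type Pod struct {
	Name          string
	SchedulerName string
	NodeName      string // non-empty once the Pod has been scheduled
}

// shouldRecordOnUpdate sketches the && guard discussed above: the event
// is only considered while the Pod is still unscheduled AND its
// schedulerName is unknown to this instance (hypothetical helper).
func shouldRecordOnUpdate(pod Pod, profiles map[string]bool) bool {
	return pod.NodeName == "" && !profiles[pod.SchedulerName]
}

func main() {
	profiles := map[string]bool{"default-scheduler": true}
	fmt.Println(shouldRecordOnUpdate(Pod{Name: "web-0", SchedulerName: "test"}, profiles))                     // true: pending, unknown scheduler
	fmt.Println(shouldRecordOnUpdate(Pod{Name: "web-0", SchedulerName: "test", NodeName: "node-1"}, profiles)) // false: already scheduled
}
```

Note that this guard only suppresses events for already-scheduled Pods; a Pod that stays pending still triggers the check on every update, which is the spam @ahg-g points out.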
In the scenarios I encountered, there were few custom schedulers based on the kube-scheduler implementation; they were more like kube-batch, hived, and volcano.
Do you have any good methods? Let me try.
Force-pushed from eea2d34 to 6dcdd17 (Compare)
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #116982
Special notes for your reviewer:
Does this PR introduce a user-facing change?