Kube scheduler panic #544
Comments
Thank you for your report. I tried it as described below, but I can't reproduce it.
I applied the manifest below.
I got the result, but the panic was not reproduced.
Hello @llamerada-jp, thanks for your fast reaction. I should have emphasized it more, but the panic occurs when the Pod to be scheduled has a priorityClassName (a high one, see my manifest), since the panic is apparently triggered in the preemption code. Your Pod manifest doesn't specify a priority class.
Hello @llamerada-jp, you can use a priority class that ships with Kubernetes.
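(For context: Kubernetes ships the PriorityClasses system-node-critical and system-cluster-critical, presumably one of those is meant here. A user-defined class with a high value should trigger the same preemption path; a hypothetical sketch, with name and value chosen only for illustration:)

```yaml
# Hypothetical high PriorityClass (name and value are illustrative), as an
# alternative to the classes Kubernetes ships by default
# (system-node-critical, system-cluster-critical).
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Pods with this class can preempt lower-priority pods."
```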
@machine424 |
Hello @llamerada-jp, yes, as I've already mentioned, it seems to be related to preemption. I reproduce this with:
The preemption code then runs and fails because of the topolvm-scheduler input. Hope this helps.
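To make the scenario concrete, here is a hypothetical sketch of the kind of manifests that could exercise this path. It is not the reporter's original reproduction; the StorageClass name, provisioner string, image, and requested size are assumptions and depend on the TopoLVM installation:

```yaml
# StorageClass backed by TopoLVM. The provisioner string depends on the
# TopoLVM release (topolvm.cybozu.com for older releases, topolvm.io for newer ones).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topolvm-provisioner
provisioner: topolvm.cybozu.com
volumeBindingMode: WaitForFirstConsumer
---
# PVC deliberately requesting more space than any node's volume group has free,
# so that topolvm-scheduler filters out every node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: huge-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: topolvm-provisioner
  resources:
    requests:
      storage: 10Ti
---
# Pod carrying a high priorityClassName. With no feasible node left after the
# extender filter, kube-scheduler enters its preemption path, which is where
# the panic is reported to occur.
apiVersion: v1
kind: Pod
metadata:
  name: preemption-panic-repro
spec:
  priorityClassName: high-priority   # the class sketched above, or a built-in one
  containers:
    - name: app
      image: k8s.gcr.io/pause:3.5    # any image works; pause keeps the example minimal
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: huge-pvc
```

With volumeBindingMode: WaitForFirstConsumer, the capacity check only happens once the Pod is being scheduled, which is when topolvm-scheduler can reject every node and leave preemption as the scheduler's next step.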
@machine424 |
This issue has been automatically marked as stale because it has not had any activity for 30 days. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please feel free to reopen this issue (or open a new one) if this still requires investigation. Thank you for your contribution.
Hello,
Describe the bug
When a Pod with a high Kubernetes priority class (https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/) is scheduled for the first time and no node has enough free disk for it, the kube-scheduler crashes.
Environments
To Reproduce
Expected behavior
The scheduler extender should not crash the kube-scheduler; this looks the same as kubernetes/kubernetes#101548.
The kube-scheduler can protect itself against this, but only starting from v1.22: kubernetes/kubernetes#101560
The scheduler should not evict any pod, as this will not free up disk.
The pod should, and does, stay in the Pending state, because no node is available to fulfil its requests.
I didn't test the case where disk is available but the node lacks enough CPU/RAM, so I don't know how the preemption code reacts there.