Pod stays pending even though node has become schedulable #149
Hi, let's try something quick: the pods that are stuck in "Pending" — are they stuck in that state forever? What happens to them after 5-6 minutes? Thank you
Hi @jiriproX, as far as I know that behaviour is expected, and it comes from this default kube-scheduler parameter: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/ As for whether it is configurable, it seems to be ... but with a caveat: "This flag is deprecated and will be removed in 1.26". So, according to the docs, if you use K8s < 1.26 you should be able to tune this value. That said, I looked at the release notes for 1.26 and this removal isn't mentioned (https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#changes-by-kind-7), so I'd say this would need testing.
Hello, yes, I can confirm I'm seeing the same thing: after waiting something like 5 minutes, they get scheduled again. But I'm fairly sure that with my previous cluster version (1.25; I was using 1.27 at the time of the issue) they would be rescheduled immediately, which is the behavior I would like to have. Thanks for the info @madalazar, I will need to look into this.
There are a number of optimizations in the k8s scheduler which prevent Pods from being repeatedly re-attempted for scheduling. Basically, there are different queues inside the scheduler, and the scheduler isn't telemetry-savvy enough to retry Pod scheduling just because the telemetry state has changed. However, there is a 5-minute last-resort fallback which periodically retries unschedulable Pods. Back in the day that delay was much shorter.

The good news is that you can adjust the delay via the (deprecated) scheduler flag:

--pod-max-in-unschedulable-pods-duration duration (Default: 5m0s)

Don't be afraid of the deprecation: that flag has been deprecated for a long time, and it isn't going away in 1.29, perhaps not even in 1.30.

For conventional scheduler plugins there is the EnqueueExtension API, but I'm not sure whether that exists at all for extenders. And even if it did, there wouldn't be a telemetry-based part in that API. The gist of that API is that plugins can declare what sorts of changes in the cluster should trigger a move from the unschedulable queue to the other queues; for example, a change in a node object could be such a trigger. In the case of TAS, I suppose the only change TAS could make would be to update some label on the node.

Meanwhile, the command-line flag is your workaround.
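As a concrete illustration, lowering the fallback delay on a kubeadm-managed cluster might look like the snippet below. This is a sketch, not the project's documented procedure: the manifest path and surrounding fields are kubeadm defaults and are assumptions here, and the 30s value is just an example.

```yaml
# /etc/kubernetes/manifests/kube-scheduler.yaml (kubeadm default location; assumption)
spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    # Retry unschedulable Pods after 30s instead of the default 5m0s.
    # The flag is deprecated but still accepted as of recent releases.
    - --pod-max-in-unschedulable-pods-duration=30s
```

Since this is a static Pod manifest, the kubelet restarts kube-scheduler automatically when the file changes.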
Describe the problem
To Reproduce
Here's my policy:
node_schedulable is scraped from my own endpoint, where I set it to 0 or 1 at will.
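For reproduction, an endpoint like that can be faked with a few lines of Python. This is a hypothetical sketch (the issue author's actual exporter is not shown); the node names, port, and `schedulable` dict are made up, and the output follows the Prometheus text exposition format that TAS's metrics pipeline scrapes.

```python
# Minimal fake exporter: serves a node_schedulable gauge per node,
# which you can flip between 0 and 1 by editing the dict below.
from http.server import BaseHTTPRequestHandler, HTTPServer

schedulable = {"node1": 1, "node2": 0}  # toggle values at will

def render_metrics():
    # Render the gauges in Prometheus text exposition format.
    lines = ["# TYPE node_schedulable gauge"]
    for node, value in schedulable.items():
        lines.append(f'node_schedulable{{instance="{node}"}} {value}')
    return "\n".join(lines) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 9100), MetricsHandler).serve_forever()
```

Point your Prometheus scrape config at port 9100, flip a value in `schedulable`, and watch whether pending Pods react.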
Expected behavior
I expect that the Pod gets scheduled after at least 1 node becomes schedulable (node_schedulable becomes 1 for that node).
Logs
When there are no schedulable nodes, e.g. from TAS:
Pod is pending:
I wait until a node becomes schedulable; then I can spawn new Pods, but the Pending ones stay pending.
Environment (please complete the following information):
K8s version: v1.27.3
Deployed using Cluster API.
Additional context
I had another policy where pods would be scheduled as soon as possible, so I'm not sure why the behavior is different now.