Pod stays pending even though node has become schedulable #149

Closed
criscola opened this issue Jun 19, 2023 · 5 comments

Comments


criscola commented Jun 19, 2023

Describe the problem

  1. There are no schedulable nodes
  2. A node becomes schedulable (metric becomes 1, similar to health_metric demo)
  3. I would like the Pending workloads to be scheduled on the schedulable node.

To Reproduce
Here's my policy:

apiVersion: telemetry.intel.com/v1alpha1
kind: TASPolicy
metadata:
  name: schedule-until-at-capacity
  namespace: default
spec:
  strategies:
    dontschedule:
      rules:
        - metricname: node_schedulable
          operator: Equals
          target: 0
    scheduleonmetric:
      rules:
        - metricname: node_schedulable
          operator: GreaterThan

node_schedulable is scraped from my own endpoint, where I set it to 0 or 1 at will.
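For reference, that endpoint just serves a single gauge in plain Prometheus text exposition format, roughly like the sketch below (the HELP text is only illustrative); I flip the value between 0 and 1 by hand:

# HELP node_schedulable 1 if the node should accept new workloads, 0 otherwise
# TYPE node_schedulable gauge
node_schedulable 1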

Expected behavior
I expect that the Pod gets scheduled after at least 1 node becomes schedulable (node_schedulable becomes 1 for that node).

Logs
When there are no schedulable nodes, e.g. in the TAS logs:

I0619 12:37:38.462395       1 telemetryscheduler.go:211] "Filter request received" component="extender"
I0619 12:37:38.462799       1 strategy.go:43] "ecoqube-wkld-dev-default-worker-topo-ptlck-656fc68575x6l666dn4t node_schedulable = 0" component="controller"
I0619 12:37:38.462807       1 strategy.go:57] "ecoqube-wkld-dev-default-worker-topo-ptlck-656fc68575x6l666dn4t violating : node_schedulable Equals 0" component="controller"
I0619 12:37:38.462810       1 strategy.go:43] "ecoqube-wkld-dev-default-worker-topo-ptlck-656fc68575x6l66jl4qm node_schedulable = 0" component="controller"
I0619 12:37:38.462816       1 strategy.go:57] "ecoqube-wkld-dev-default-worker-topo-ptlck-656fc68575x6l66jl4qm violating : node_schedulable Equals 0" component="controller"
I0619 12:37:38.462818       1 strategy.go:43] "ecoqube-wkld-dev-default-worker-topo-ptlck-656fc68575x6l66wbw54 node_schedulable = 0" component="controller"
I0619 12:37:38.462821       1 strategy.go:57] "ecoqube-wkld-dev-default-worker-topo-ptlck-656fc68575x6l66wbw54 violating : node_schedulable Equals 0" component="controller"

Pod is pending:

$ kubectl get pods
default                   500m-cpu-stresstest-252504ce-trbm4                     0/1     Pending     0             55s

I wait until a node is schedulable; after that I can spawn new Pods, but the already-Pending ones stay Pending.

Environment (please complete the following information):
K8s version: v1.27.3
Deployed using Cluster API.

Additional context
I had another policy where pods would be scheduled as soon as it became possible, so I'm not sure why the behavior is different now.

@madalazar
Contributor

Hi,

Let's try something quick: are the pods that are stuck in "Pending" stuck in that state forever? What happens to them after 5-6 minutes?
Could you also add the TAS & K8s default scheduler logs here?

Thank you

madalazar added a commit to madalazar/platform-aware-scheduling that referenced this issue Jul 4, 2023
@jiriproX

Hi,
It seems we have hit the same issue. When a pod is started while the dontschedule policy is active, the pod gets stuck in the Pending state. Even if we then switch the policy to scheduleonmetric, the pod remains Pending, although new pods are created properly. After 5 minutes the pending pod wakes up, is scheduled again, and moves to the Running state. Is that behaviour expected? Is the 5-minute timeout somehow configurable?
We use TAS version 0.5.0.

@madalazar
Contributor

@jiriproX As far as I know, that behaviour is expected, and it comes from this K8s default scheduler parameter: --pod-max-in-unschedulable-pods-duration duration (default: 5m0s), documented at https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/.

As for whether it is configurable, it seems to be, but with a caveat: "This flag is deprecated and will be removed in 1.26".

So, according to the docs, if you use K8s < 1.26 you should be able to tune this value. That said, I looked at the release notes for 1.26 and this deprecation isn't mentioned (https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#changes-by-kind-7), so I'd say this would need testing.
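If you want to experiment with it, the flag is passed straight to kube-scheduler. On a kubeadm-managed control plane that would mean editing the kube-scheduler static pod manifest, along the lines of this sketch (existing flags omitted; the 1m value is just an example):

# /etc/kubernetes/manifests/kube-scheduler.yaml (fragment)
spec:
  containers:
  - name: kube-scheduler
    command:
    - kube-scheduler
    # ... existing flags ...
    - --pod-max-in-unschedulable-pods-duration=1m   # default is 5m0s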

@criscola
Author

criscola commented Jul 25, 2023

Hello, yes, I confirm I'm seeing the same thing: after waiting something like 5 minutes, the pods get scheduled again. But I'm fairly sure that with my previous cluster version (1.25; I'm using 1.27 as of this issue) they would be rescheduled immediately, which is the behavior I would like to have. Thanks for the info @madalazar, I will need to look into this.

@uniemimu
Collaborator

There are a number of optimizations in the k8s scheduler which prevent Pods from being retried for scheduling too eagerly. Basically, there are different queues inside the scheduler, and the scheduler isn't telemetry-savvy enough to retry Pod scheduling just because the telemetry state has changed. But there is a 5-minute last-resort fallback which periodically retries unschedulable Pods. Back in the day that delay was much shorter.

The good news is that you can adjust the delay via the (deprecated) scheduler flag --pod-max-in-unschedulable-pods-duration duration (default: 5m0s).

Don't be afraid of the deprecation. That flag has been deprecated for a long time, and it isn't going away in 1.29, perhaps not even in 1.30.

For conventional scheduler plugins there is the EnqueueExtension API, but I'm not sure whether anything like it exists for extenders, and even if it did, there wouldn't be a telemetry-based part in that API. The gist of that API is that plugins can declare what sorts of changes in the cluster should trigger moving a Pod from the unschedulable queue to the other queues; for example, a change in a Node object could be such a trigger. In the case of TAS, I suppose the only change TAS could make would be to update some label on the node.

Meanwhile, the command-line flag is your workaround.
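Since the cluster in question is deployed with Cluster API on top of kubeadm, one way to pass that flag would be through the scheduler extraArgs of the KubeadmControlPlane object, something like the sketch below (untested; the object name and the 1m duration are placeholders):

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-control-plane            # placeholder
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      scheduler:
        extraArgs:
          pod-max-in-unschedulable-pods-duration: "1m"   # default is 5m0s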
