requiredDuringSchedulingRequiredDuringExecution status #96149
Comments
@Treyone: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
/sig scheduling
Are you referring to the k8s docs? If so, please flag those so we can fix the docs.
I believe `requiredDuringSchedulingRequiredDuringExecution` is not implemented yet.
Correct, it's not implemented yet. And due to its complexity, we don't have a concrete timeline for when it will be implemented.
No, the doc is clear on the "planned" status. Those were SO discussions like this one.
/sig node
It should fail during Pod creation, as the field doesn't exist in the API. Is this not the case? As for the implementation, it's more on the kubelet side. This is somewhat similar to taint-based evictions, which, AFAIK, had several implementation issues.
@alculquicondor Could you explain more about the "implementation issues"? I read some previous discussions; I think these conversations indicate the […]
cc @damemi who was involved in taint-based evictions
I wasn't very involved in the early design for TBE, but part of the legwork to get it to GA was officially handing it off to the node team for ownership. So I agree with @alculquicondor here (and with what you said, @lingsamuel) that this is probably similarly out of the scheduler's scope, because the scheduler is generally not concerned with pods after they've been placed on a node. Some of the discussion around this has suggested implementing it explicitly in descheduler's NodeAffinity strategy (see the sketch below), at least temporarily until a core solution can be agreed upon. I think the core solution should ultimately be something similar to TaintManager.
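For context, here is a minimal sketch of the descheduler strategy mentioned above, in descheduler's v1alpha1 policy format (exact fields may vary between descheduler versions, and note it only covers node affinity, not the pod-affinity case this issue is about):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Periodically re-checks the scheduling-time node-affinity rule and evicts
  # pods that no longer satisfy it, approximating "RequiredDuringExecution"
  # semantics outside of core Kubernetes.
  "RemovePodsViolatingNodeAffinity":
    enabled: true
    params:
      nodeAffinityType:
      - "requiredDuringSchedulingIgnoredDuringExecution"
```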
I think the only thing related to […]
@lingsamuel sure, opening a PR doesn't hurt :) This may be big enough to merit its own KEP though, @alculquicondor wdyt?
Definitely requires a KEP, but it's not up to sig-scheduling (us) to approve.
@Treyone I am writing a KEP user story, could you share more information about your use case? When re-reading your description, I am confused about why the scheduler couldn't re-schedule the crashed pod to the previous node.
The affinity rule I used is the following:

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: my-app-role
          operator: In
          values:
          - master
      topologyKey: kubernetes.io/hostname
```

I need this pod to be co-located on the same host as the pod with this label. There's only one pod with this label, so I think only one node matched the rule, but maybe I'm missing something?
There are 2 Pods, say A and B. The rule above sets a dependency B->A. If A gets removed, […]
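To make the A/B dependency concrete, here is a minimal sketch under assumed names (`pod-a`, `pod-b`, and the pause image are illustrative placeholders; the `my-app-role` label comes from the rule above):

```yaml
# Pod A: the "master" pod that B depends on.
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  labels:
    my-app-role: master
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
---
# Pod B: can only be scheduled onto the node already running a pod labeled
# my-app-role=master. With IgnoredDuringExecution, the rule is checked only
# at scheduling time: if A later moves or disappears, B is neither evicted
# nor re-checked, so the co-location constraint silently breaks.
apiVersion: v1
kind: Pod
metadata:
  name: pod-b
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: my-app-role
            operator: In
            values:
            - master
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```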
🤔 well, I misunderstood the context. I thought we were talking about NodeAffinity.
Indeed... What this issue is talking about is the "...RequiredDuringExecution" semantics of Pod(Anti)Affinity, which has significant performance implications: for every new pod and every update to an existing pod, you would have to check whether it breaks an existing Pod(Anti)Affinity constraint.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale |
I am waiting for someone to review the KEP kubernetes/enhancements#2359
@lingsamuel @alculquicondor what is the status of this issue? Is there a workaround until `requiredDuringSchedulingRequiredDuringExecution` becomes available?
You can check the KEP or take it over if nobody is working on it.
@alculquicondor the KEP kubernetes/enhancements#2359 is about node affinity and not pod affinity. |
I'm not aware of any. Feel free to work on it. But note that revisions fall under sig/node, with sig/scheduling input. |
please make sure the KEP is recorded in the sig-node tracker and introduced in the sig-node mtg, because I can't recall this being mentioned in recent meetings (may very well be my fault though). Anyway, since we have a KEP and a k/e issue, it seems this issue is no longer needed.
@fromanirh: Closing this issue.
added the KEP linked here to the sig-node feature backlog document |
what is the status of this feature in k8s? @fromanirh |
AFAIK no changes since last update |
/reopen |
@AxeZhan: Reopened this issue.
For anyone who is still interested in this issue: I've created a new KEP with POC code in k/k. Hope I can get some reviewers/mentors from here 😊
Note that this is mostly a sig-node feature, so try to get a review from that SIG first.
Well, after the discussion in the KEP, the main concern is: why is it necessary to add a controller to k8s when the descheduler can do the same thing? So I would like to ask for your opinions @alculquicondor @Huang-Wei .
To me, this sounds like a basic feature that should be part of Kubernetes. We already have a similar mechanism for taints, as sketched below.
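For comparison, the existing taint-based mechanism referenced above: `NoExecute` taints are enforced during execution, and a pod's toleration can bound how long it may keep running on a tainted node (these are standard Kubernetes fields; the key and timeout here are just an example):

```yaml
# A pod tolerating the node.kubernetes.io/not-ready NoExecute taint for up
# to 5 minutes: if the taint persists longer, the pod is evicted. This is
# exactly the "required during execution" behavior that taints already have
# and that pod/node affinity currently lacks.
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
```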
That is very hard to balance. I think we should implement it, but it's not critical. |
my 2c (FWIW): I raised this comment on the KEP review, but I actually don't have a strong opinion either way. If we move the functionality into core k/k, we however need a good and clear rationale for why we do so (which is, again, something I'm fine with, but which deserves a rationale), and we need to make very clear what the overlap with the scheduler is, what the transition plan is (for users of descheduler), and if/how the feature and descheduler will coexist.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
`requiredDuringSchedulingRequiredDuringExecution` is mentioned in the documentation as "In the future we plan to offer", but I could not find any plan to do so in the near future. None of the workarounds I found so far could help us fix a scheduling problem where `requiredDuringSchedulingIgnoredDuringExecution` is insufficient. We have a constraint that some replicas of a deployment be co-located on the same node. `requiredDuringSchedulingIgnoredDuringExecution` does the job for the initial deployment, but whenever one of these pods crashes, it will most probably be re-scheduled somewhere else, crashing the whole application.

What is also disturbing is that I found some resources mentioning it as working, and when I tried it, there was no error when I applied it: the `affinity` section was just left blank without notice, both in the updated deployment and pods. If the feature is not supported, I doubt this is the intended behavior, is it?