Support topologySpreadConstraints #2192
Comments
You can already set a topology key via affinity. This is what I use:

```yaml
affinity:
  topologyKey: topology.kubernetes.io/zone
```
Yes, in theory this will work, but in a smaller cluster, say 4 nodes, there's a good chance two replicas will be scheduled on the same node, since the hostname is no longer the topologyKey for the pod affinity.
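For illustration, a single topologySpreadConstraints block can spread across zones and hostnames at the same time, which addresses the node co-location concern above. This is a sketch, not the operator's actual output; the `app: my-postgres` pod label is hypothetical:

```yaml
topologySpreadConstraints:
- maxSkew: 1                               # zone pod counts may differ by at most one
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule         # hard requirement across zones
  labelSelector:
    matchLabels:
      app: my-postgres                     # hypothetical label for this cluster's pods
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway        # soft preference: avoid stacking on one node
  labelSelector:
    matchLabels:
      app: my-postgres
```

The second constraint is what plain zone-keyed anti-affinity cannot express: it still prefers separate nodes even when replicas must share a zone.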
Closes #2192 Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com>
@ColonelBundy How can two replicas be scheduled on the same node, when they wouldn't be allowed in the same zone?
Right, I misspoke. What I meant was: if I have two zones, I would be limited to two replicas. With topologySpreadConstraints I can fulfill the spread requirement first, and once it's fulfilled I can schedule more replicas in the same zone. This cannot be done with pod affinity, I'm afraid.
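As a sketch of that behavior: with a zone constraint and `maxSkew: 1`, four replicas across two zones can land 2+2, whereas required pod anti-affinity on the zone key would cap the cluster at one replica per zone. The label selector below is hypothetical:

```yaml
topologySpreadConstraints:
- maxSkew: 1                               # e.g. 4 replicas in 2 zones -> 2+2 is allowed
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: my-postgres                     # hypothetical pod label
```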
Topology spread constraints went GA in Kubernetes 1.19, which means we can start relying on them. They were further improved in the latest releases, 1.26 and 1.27, so some features are available only on 1.26 or above. For reference, see https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/ for more information. Closes #2192 Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com> Signed-off-by: Jaime Silvela <jaime.silvela@enterprisedb.com> Co-authored-by: Jaime Silvela <jaime.silvela@enterprisedb.com> (cherry picked from commit 50601b0)
@ColonelBundy all the credit to @armru :)
Currently we only have affinity rules to play with to control scheduling. In order to guarantee that a replica and primary don't get scheduled in the same zone we need to be able to specify topologySpreadConstraints.
You currently can limit the zone using affinity rules, but it's not enough; topologySpreadConstraints give much finer control over this.
Reference: https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
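A hedged sketch of what this could look like on a CloudNativePG Cluster once supported: `spec.topologySpreadConstraints` here mirrors the core Pod API, but the exact field placement and the `cnpg.io/cluster` label should be verified against the release docs; the cluster name and storage size are illustrative:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example                    # illustrative name
spec:
  instances: 3
  topologySpreadConstraints:               # assumed field, per this feature request
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        cnpg.io/cluster: cluster-example   # label assumed to match the operator's pods
  storage:
    size: 1Gi                              # illustrative size
```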