
Support topologySpreadConstraints #2192

Closed

ColonelBundy opened this issue Jun 2, 2023 · 6 comments · Fixed by #2202

Comments

@ColonelBundy

ColonelBundy commented Jun 2, 2023

Currently we only have affinity rules to control scheduling. In order to guarantee that a replica and the primary don't get scheduled in the same zone, we need to be able to specify topologySpreadConstraints.

You can currently limit the zone using affinity rules, but it's not enough; with topologySpreadConstraints you can control this a lot better.

Reference: https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
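
For context, a minimal sketch of what a topologySpreadConstraints stanza looks like on a pod spec, per the linked Kubernetes docs (the label selector value is illustrative, not something the operator sets today):

  topologySpreadConstraints:
    - maxSkew: 1                                # pod count per zone may differ by at most 1
      topologyKey: topology.kubernetes.io/zone  # spread across availability zones
      whenUnsatisfiable: DoNotSchedule          # hard requirement, not best effort
      labelSelector:
        matchLabels:
          cnpg.io/cluster: cluster-example      # illustrative selector for this cluster's pods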

@itay-grudev
Contributor

itay-grudev commented Jun 2, 2023

You can already set a topologyKey that will ensure that two pods are not scheduled in the same zone. See https://cloudnative-pg.io/documentation/current/api_reference/#AffinityConfiguration

This is what I use:

  affinity:
    topologyKey: topology.kubernetes.io/zone
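
For completeness, a minimal sketch of where that fragment sits in a full Cluster manifest (the name, instance count, and storage size are placeholders):

  apiVersion: postgresql.cnpg.io/v1
  kind: Cluster
  metadata:
    name: cluster-example
  spec:
    instances: 3
    affinity:
      topologyKey: topology.kubernetes.io/zone
    storage:
      size: 1Gi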

@ColonelBundy
Author

Yes, this will work in theory, but in a smaller cluster, let's say 4 nodes, there's a good chance two replicas will be scheduled on the same node, since the hostname is no longer the topologyKey for the pod affinity.
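
For illustration, a hostname-level spread constraint would express exactly that requirement; a sketch, with an illustrative label selector:

  topologySpreadConstraints:
    - maxSkew: 1                            # no node may take a second replica while another has none
      topologyKey: kubernetes.io/hostname   # spread across individual nodes
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          cnpg.io/cluster: cluster-example  # illustrative; match the cluster's pod labels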

armru added a commit that referenced this issue Jun 5, 2023
Closes #2192

Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com>
@armru armru removed their assignment Jun 7, 2023
litaocdl pushed a commit that referenced this issue Jun 8, 2023
Closes #2192

Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com>
@itay-grudev
Contributor

itay-grudev commented Jun 8, 2023

@ColonelBundy How can two replicas be scheduled on the same node, when they wouldn't be allowed in the same zone?

@ColonelBundy
Author

Right, I misspoke. What I meant was: if I have two zones, I would be limited to two replicas. With topologySpreadConstraints I can fulfill that requirement, and once it's fulfilled I can schedule more replicas in the same zone. This cannot be done with pod affinity, I'm afraid.
On top of that, I can combine constraints, for example that I shouldn't be allowed to put more replicas on the same node until all other nodes have replicas, which cannot be done with affinity either.
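
For illustration, both requirements can be expressed as two constraints evaluated together (values illustrative): with maxSkew: 1 on the zone key, a third replica is still allowed in a zone once every zone has one, and the hostname constraint keeps a node from taking a second replica until every node has one.

  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone   # fill zones evenly, but allow extra replicas once balanced
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          cnpg.io/cluster: cluster-example       # illustrative
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname        # no second replica on a node until all nodes have one
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          cnpg.io/cluster: cluster-example       # illustrative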

jsilvela pushed a commit that referenced this issue Jun 9, 2023
Closes #2192

Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com>
jsilvela pushed a commit that referenced this issue Jun 9, 2023
Closes #2192

Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com>
sxd pushed a commit that referenced this issue Jun 9, 2023
Closes #2192

Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com>
sxd pushed a commit that referenced this issue Jun 10, 2023
Closes #2192

Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com>
@sxd sxd closed this as completed in 50601b0 Jun 10, 2023
cnpg-bot pushed a commit that referenced this issue Jun 10, 2023
Topology spread constraints went GA in Kubernetes 1.19, which means we can start using them; this wasn't needed before, but with the latest versions, 1.26 and 1.27, many improvements were added, and we can now implement topology spread constraints.

Some features may not work since they are available only on 1.26 or above; for reference, please read https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/ for more information.

Closes #2192

Signed-off-by: Armando Ruocco <armando.ruocco@enterprisedb.com>
Signed-off-by: Jaime Silvela <jaime.silvela@enterprisedb.com>
Co-authored-by: Jaime Silvela <jaime.silvela@enterprisedb.com>
(cherry picked from commit 50601b0)
sxd pushed a commit that referenced this issue Jun 10, 2023
sxd pushed a commit that referenced this issue Jun 10, 2023
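
With the change from #2202 merged, the constraint can presumably be declared directly on the Cluster resource; a minimal sketch, assuming the field lands as spec.topologySpreadConstraints with the upstream Kubernetes pod-spec shape:

  apiVersion: postgresql.cnpg.io/v1
  kind: Cluster
  metadata:
    name: cluster-example
  spec:
    instances: 3
    storage:
      size: 1Gi
    topologySpreadConstraints:          # assumed field, mirroring the pod-spec API
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            cnpg.io/cluster: cluster-example
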
@ColonelBundy
Author

@armru @sxd Thanks for the quick implementation! Greatly appreciated!

@sxd
Member

sxd commented Jun 16, 2023

@ColonelBundy all the credit to @armru :)
