[Feature]: Disperse Primary Pods equally across Kubernetes Compute Nodes #4369

labkey-stuartm opened this issue Apr 24, 2024 · 1 comment
Labels: triage (Pending triage)
@labkey-stuartm

Is there an existing issue already for this feature request/idea?

  • I have searched for an existing issue and could not find anything. I believe this is a new feature request to be evaluated.

What problem is this feature going to solve? Why should it be added?

While running multiple CNPG clusters on a given Kubernetes cluster, the operator tends to concentrate the primary CNPG pods on a single Kubernetes node. This can degrade overall Kubernetes cluster performance, as all the PG read/write traffic is directed at one compute node.

This behavior also occurs during Kubernetes node maintenance: when you lifecycle out older nodes and replace them with newer ones, the primaries tend to end up on the same node.

This can be problematic when you run dedicated Kubernetes clusters hosting dozens of CNPG clusters.

Describe the solution you'd like

As a CNPG administrator, I would like the CNPG operator to place the primary CNPG pods across the Kubernetes compute nodes in a round-robin fashion.

Alternatively,
As a CNPG administrator, I would like to be able to issue a single kubectl CNPG operator command that requests the CNPG operator to disperse the primary CNPG pods across the Kubernetes compute nodes in a round-robin fashion. For example: kubectl cnpg disperse primary --all-namespaces

Describe alternatives you've considered

Manually promoting primaries using the kubectl cnpg promote command. This works fine if you have 4-5 clusters, but becomes difficult with 50+.

Additional context

No response

Backport?

No

Are you willing to actively contribute to this feature?

No

Code of Conduct

  • I agree to follow this project's Code of Conduct
@Ornias1993

To get somewhat closer to this, you can set topologySpreadConstraints to prefer nodes that do not yet run a CNPG pod.
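
For illustration, a minimal sketch of what that could look like on a Cluster resource. This assumes the Cluster spec's topologySpreadConstraints field and the cnpg.io/podRole: instance pod label; check which labels your CNPG version actually applies to instance pods before relying on this.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  # Soft spreading: prefer an even distribution of CNPG instance pods
  # across nodes, but still schedule if the constraint can't be met.
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          # Assumed label for CNPG instance pods; verify against the
          # labels your operator version sets on its pods.
          cnpg.io/podRole: instance
```

Note that spread constraints only act at scheduling time, so this spreads instance pods in general rather than primaries specifically; a pod's primary/replica role can change after it is scheduled, which is why this only gets you somewhat closer to the requested behavior.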
