[INFO] Cluster limits #1863

Closed · tom-asmblr opened this issue Jun 16, 2023 · 1 comment

@tom-asmblr (Contributor)

Hello, thanks again for Liqo; it's working fantastically for us so far.

We've implemented a 'hub and spoke' style cluster setup, where we have one main cluster and a set of worker clusters to pick up the pods. That means all our 'top-level' definitions (Deployments, etc.) live in our main cluster, while all the real pods are distributed across the various regions.

My question concerns individual cluster limits (see https://kubernetes.io/docs/setup/best-practices/cluster-large). Obviously the main cluster sees the worker clusters as 'nodes' and (as far as I can tell) thinks that the "fake" pods running in the main cluster are real. We've already tested the "No more than 110 pods per node" limit by scheduling a thousand or so pods onto a worker cluster without issue, so this clearly isn't enforced for "fake" pods. The question I have is about the "No more than 150,000 total pods" limit. Does anybody know whether this will be enforced if we have more than that number of fake pods in our main cluster, distributed across a large number of worker clusters?
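For illustration, here is a minimal client-go sketch that counts every pod the hub API server knows about (shadow pods included), which is the number the 150,000-total-pods guidance from the large-cluster docs refers to. The kubeconfig path and the page size are assumptions, not anything Liqo prescribes:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config);
	// adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Page through all pods in all namespaces so the count stays cheap
	// even in a cluster approaching the 150,000-pod guidance.
	total := 0
	opts := metav1.ListOptions{Limit: 500}
	for {
		pods, err := client.CoreV1().Pods("").List(context.TODO(), opts)
		if err != nil {
			panic(err)
		}
		total += len(pods.Items)
		if pods.Continue == "" {
			break
		}
		opts.Continue = pods.Continue
	}
	fmt.Printf("total pods visible to the hub API server: %d\n", total)
}
```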

Thanks again!

@frisso
Copy link
Member

frisso commented Jun 16, 2023

@tom-asmblr The limits we mentioned come from general practice in Kubernetes. Since Liqo still relies on Kubernetes, most of Kubernetes' limits are also limits in Liqo. Hence, we do not enforce any limitations in Liqo; they simply come from general experience in using Kubernetes.
Some K8s limitations, though, do not apply. For instance, the 110-pods-per-node limit does not apply (since we have a virtual node), and (in part) neither does the maximum number of nodes, since all the remote nodes are hidden behind a single virtual node.
But the K8s limitation on the total number of pods is still there.
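To see why the per-node limit doesn't bite, note that the scheduler enforces whatever pod capacity each kubelet advertises in .status.allocatable (the default kubelet maxPods is 110, but a virtual node can advertise more). A hedged client-go sketch, assuming the default kubeconfig path, that prints each node's advertised capacity:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Print the pod capacity each node advertises; the scheduler
	// enforces this per-node value, not a hard-coded 110, so a
	// virtual node advertising a larger value can host more pods.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%-40s allocatable pods: %s\n", n.Name, n.Status.Allocatable.Pods().String())
	}
}
```

Consistent with the behavior described above (a thousand pods scheduled onto one worker cluster), the virtual node should report a value well above 110.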
Hope this helps.
