Hello, thanks again for liqo, it's working fantastically for us so far.
We've implemented a 'hub and spoke' style cluster setup, where we have one main cluster and a set of worker clusters to pick up the pods. That means all our 'top level' definitions (Deployments, etc.) live in our main cluster, while all the real pods are distributed across the various regions.
My question pertains to individual cluster limits (see https://kubernetes.io/docs/setup/best-practices/cluster-large). The main cluster sees the worker clusters as 'nodes' and, as far as I can tell, thinks the "fake" pods running in the main cluster are real. We've already tested the "no more than 110 pods per node" limit by scheduling a thousand or so pods onto a worker cluster without issue, so that limit clearly isn't enforced for "fake" pods. My remaining question concerns the "no more than 150,000 total pods" limit: does anybody know whether it will be enforced if we have more fake pods than that in our main cluster, distributed across a large number of worker clusters?
Thanks again!
@tom-asmblr The limits mentioned above come from general Kubernetes practice. Since Liqo still relies on Kubernetes, most Kubernetes limits also apply to Liqo: we do not enforce any limitation in Liqo itself; these numbers come from general experience operating Kubernetes.
Some K8s limitations, though, do not apply. For instance, the 110-pods-per-node limit does not, since each remote cluster is exposed as a virtual node. The maximum number of nodes is also relaxed (in part), since all the remote nodes are hidden behind a single virtual node.
However, the K8s limitation on the total number of pods is still there, since every offloaded pod also exists as a pod object in the main cluster's API server.
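To reason about that total-pods limit in a hub-and-spoke setup, here is a minimal back-of-the-envelope sketch. The cluster names and pod counts are purely hypothetical; the point is that pods offloaded to worker clusters still count toward the hub cluster's pod-object total, since each has a corresponding object in the hub's API server.

```python
# Sketch: the 150,000-pod guideline applies to pod objects stored in the
# hub cluster's API server / etcd, which includes the "fake" (offloaded)
# pods. All numbers below are illustrative assumptions, not real data.

POD_GUIDELINE = 150_000  # upstream large-cluster best-practice figure

# Hypothetical offloaded-pod counts per worker cluster.
pods_per_worker = {
    "worker-eu": 40_000,
    "worker-us": 55_000,
    "worker-ap": 30_000,
}

local_pods = 5_000  # pods actually running on the hub's own nodes

total = local_pods + sum(pods_per_worker.values())
print(f"total pod objects in hub: {total}")
print(f"within guideline: {total <= POD_GUIDELINE}")
```

In other words, spreading pods across many worker clusters relaxes the per-node and node-count limits, but the hub's control plane still has to track one pod object per offloaded pod.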
Hope this helps.