Ability to tune the number of typha replicas #1295

Closed
bodgit opened this issue May 10, 2021 · 10 comments

bodgit commented May 10, 2021

Expected Behavior

I want to provision a reasonable number of small test EKS clusters that don't necessarily require a large number of nodes but do need to be fully functional, i.e. running calico, cluster-autoscaler, etc. The typha deployment appears to be fixed at 3 replicas, which means each cluster needs a minimum of 3 nodes regardless of how utilised they are.

Current Behavior

The typha deployment is set to 3 replicas, which means that as soon as it's deployed, the cluster-autoscaler kicks in, increases the node count to 3 to satisfy the deployment, and never scales back down again. I can edit the deployment once it's deployed, but the change seems to be overwritten again within a few minutes. Do I need 3 replicas for consensus reasons, or can I get away with fewer?

Possible Solution

I've made use of the registry and imagePullSecrets fields on the Installation resource as my EKS clusters are entirely private (thanks for those!). Would adding another field here be a potential solution?
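
For illustration only, I'm imagining something shaped like this (typhaReplicas is purely hypothetical and not part of the operator's API today; the sketch is not the operator's real Installation type, it just shows the kind of knob I mean):

package sketch

// InstallationSpecSketch is a hypothetical illustration of the kind of field being
// requested; it is not the operator's real Installation API.
type InstallationSpecSketch struct {
	// Registry stands in for the existing private-registry setting on Installation.
	Registry string `json:"registry,omitempty"`
	// TyphaReplicas (hypothetical) would let small test clusters run fewer typha pods.
	TyphaReplicas *int32 `json:"typhaReplicas,omitempty"`
}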

Your Environment

  • Operating System and version: Amazon EKS

Omar007 commented May 10, 2021

Maybe it would be interesting not only to tune the replica count, but also to let the Installation CRD disable the use of Typha completely.
Calico currently has a lot of different ways to cover installation/deployment (operator by manifest, operator by Helm, Calico by manifest variant A, Calico by manifest variant B, etc.). Some of these even overlap in their end result.
Providing different solutions isn't inherently bad, but at the moment I wouldn't say it's obvious or logical how to decide on a deployment strategy for maintaining Calico, as not all of them currently cover every documented deployment case either.

Adding this capability would make the operator an alternative for the on-prem "50 nodes or less" deployment variant that currently relies on a separate manifest. That variant's end result is currently not covered by the operator, and adding this option would make it possible to use the operator for basically all documented deployment variants.
You would then only need to provide and manage different Installation objects for your clusters instead of also having to manage completely different installation methods.

tmjd (Member) commented May 10, 2021

@bodgit The operator will auto-scale typha, so if only 1 or 2 nodes are present the appropriate number of typhas will be used. Also, if you deploy a large number of nodes, typha will be scaled up as necessary by the operator. Typha will not be scaled to fewer than 3 replicas if there are at least 3 nodes available, as we consider 3 to be the minimum for high availability purposes.
If you are looking for typha to scale to fewer than 3 after scaling a cluster up to 3 or more nodes, then yes, that is something the operator will not do.

@Omar007 The operator deploys what we believe to be best practice, and that includes typha at all installation sizes. The operator is a great option for the on-prem 50-node-or-less deployment. The recommendation that typha should be deployed on clusters with more than 50 nodes should not be taken as a recommendation that it should not be deployed on clusters with fewer than 50 nodes.

Omar007 commented May 13, 2021

@tmjd Ok, fair enough. Does that then mean that part of the documentation is basically outdated/deprecated?
The way it's set up and explicitly split into 2 use cases suggested to me that the deployment variant without typha was preferred for clusters with <50 nodes.

Small side note: it looks like the scale count doesn't care about the node type and counts both master/control-plane nodes and worker nodes. Is that something that should be accounted for?

bodgit (Author) commented May 14, 2021

@tmjd Thanks for the explanation. Is the scaling algorithm documented somewhere, e.g. how many nodes do I need to have before a fourth typha replica is added, etc.?

tmjd (Member) commented May 14, 2021

@Omar007 I don't think the documentation is outdated. It is perfectly fine to deploy without typha for fewer than 50 nodes.

It looks like the scale count doesn't care about the node type and will operate on both master/control-plane nodes as well as worker nodes. Is that something that should be accounted for?

Any host that will be running calico/node should be counted for the purposes of typha scaling.

@bodgit The only documentation is in the code 😀:

// typhaAutoscaler periodically lists the nodes and, if needed, scales the Typha deployment up/down.

with the actual implementation in:

func GetExpectedTyphaScale(nodes int) int {
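
Roughly, the behaviour works out like this (an illustrative sketch only, not the actual implementation; it just mirrors what is described in this thread: one typha per node on 1- and 2-node clusters, a floor of 3 replicas from 3 nodes, and a fourth replica appearing at around 200 nodes):

package main

import "fmt"

// expectedTyphaReplicas is a hypothetical sketch, not the real GetExpectedTyphaScale;
// it only reproduces the behaviour described above.
func expectedTyphaReplicas(nodes int) int {
	if nodes < 3 {
		// On 1- or 2-node clusters, run one typha per node.
		return nodes
	}
	// A floor of 3 replicas for HA, plus roughly one extra replica per 200 nodes.
	return 3 + nodes/200
}

func main() {
	for _, n := range []int{1, 2, 3, 50, 199, 200, 450} {
		fmt.Printf("%d nodes -> %d typha replicas\n", n, expectedTyphaReplicas(n))
	}
}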

Omar007 commented May 14, 2021

@tmjd Maybe that was a bit strongly worded. What I meant was more along the lines of: unlike how it is presented in the documentation, it's not the preferred/recommended deployment method (anymore?).
To me the Calico documentation currently very much implies it is the first/primary option you should use if you're running a cluster with <50 nodes, given that it's the first option shown and is present as an explicit use case in the first place.
Basically, the more exactly the documentation matches the desired use case, the more it suggests that that deployment method is the preferred and best option available for the given use case. Even more so since it doesn't explicitly say anywhere that it isn't.

As such, when I looked at using Calico based purely on the docs (layout, wording, etc.), they suggested to me that:

  1. <50 nodes -> preferred to deploy minimally without typha
  2. >50 nodes -> preferred to deploy with typha
  3. Instead of using loose manifests, use of the operator is preferred if possible

And when trying to use the operator for the <50 nodes use case, which seemed to be documented as the preferred one, I came upon this issue while trying to figure out how to deploy without typha to match that documented use case.

In reality it seems/feels like it's more like:

  1. Deployments with typha are the preferred method and regarded as best practice.
  2. Use of the operator is preferred if possible.
  3. Any other use case: possible, but you probably just shouldn't bother

Which is fine by me, it just wasn't at all clear to me until you elaborated here ;)

This information helps a lot in deciding on a deployment strategy because it basically flattens the whole selection of options down to just one for me: use the operator, which will deploy Calico with typha, which is the preferred and best-practice deployment.

bodgit (Author) commented May 14, 2021

@tmjd A pointer to the code is fine; that comment block explains it perfectly. I see I will have to get into the realms of 200+ nodes before a fourth typha pod appears, and I think that will be a nice problem to have!

itmustbejj commented Jun 4, 2021

I ran into this issue today. When using cluster-autoscaler, once you scale to a number of nodes that increases the typha replicas, the cluster-autoscaler will never be able to scale the nodes back down to a smaller size that would decrease the number of typha replicas. For example, here the cluster-autoscaler is unable to evict typha pods in order to scale below 3 nodes:

I0603 20:49:59.394882       1 klogx.go:86] Evaluation ip-10-xx-xx-xx.region.compute.internal for calico-system/calico-typha-5557f5df96-96l2x -> node(s) didn't have free ports for the requested pod ports; predicateName=NodePorts; reasons: node(s) didn't have free ports for the requested pod ports; debugInfo=
I0603 20:49:59.394913       1 klogx.go:86] Evaluation ip-10-xx-xx-xy.region.compute.internal for calico-system/calico-typha-5557f5df96-96l2x -> node(s) didn't have free ports for the requested pod ports; predicateName=NodePorts; reasons: node(s) didn't have free ports for the requested pod ports; debugInfo=
I0603 20:49:59.394928       1 cluster.go:190] Fast evaluation: node ip-10-xx-xx-xz.region.compute.internal is not suitable for removal: failed to find place for calico-system/calico-typha-5557f5df96-96l2x

I believe this would work if the tigera-operator supported the annotation cluster-autoscaler.kubernetes.io/safe-to-evict: "true" for calico-typha like the calico helm chart does, but this annotation isn't configurable in the operator, and is explicitly stripped off during migration by tigera-operator (probably for a good reason).
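
For illustration only (the operator doesn't expose this today), the annotation would need to end up on the typha Deployment's pod template, roughly like this:

package sketch

import appsv1 "k8s.io/api/apps/v1"

// addSafeToEvict is a hypothetical sketch, not operator code: it shows where the
// cluster-autoscaler.kubernetes.io/safe-to-evict annotation would have to land
// (on the pod template metadata) for the autoscaler to consider typha pods evictable.
func addSafeToEvict(d *appsv1.Deployment) {
	if d.Spec.Template.Annotations == nil {
		d.Spec.Template.Annotations = map[string]string{}
	}
	d.Spec.Template.Annotations["cluster-autoscaler.kubernetes.io/safe-to-evict"] = "true"
}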

I can understand why running > 2 typha replicas is recommended for HA during updates, but in smaller non-prod clusters, this can be undesirable.

tmjd (Member) commented Jun 4, 2021

@itmustbejj Can I suggest opening a new issue, as adding safe-to-evict seems like a reasonable request. Though from the log messages you added, I'm not confident it would evict the pod even if it had that annotation. The message says not suitable for removal: failed to find place for calico-system/calico-typha, which to me suggests that it does think the pod could be evicted, but there is no place for it to go, so it wouldn't evict it anyway.

tmjd (Member) commented Jun 4, 2021

I'm going to close this Issue as the original request is not something we're going to expose.
