node autoscaling not scaling #324
Comments
What is the autoscaler logging on startup? It should look something like this:
The last line there shows that it has found an ASG to use. Also note this message:
@max-rocket-internet looks like I'll keep trying to figure it out - thx
@max-rocket-internet I'm very stuck - out of ideas. CA is not working because pods are not scheduling. I noticed that after redeploying the cluster from scratch with the min number of nodes increased to 2, I was able to successfully install all apps. I repeated this until the pods filled the nodes, but it never scaled. The CA logs give the following -
GOT IT! I think I found a bug!! The tag is created wrong in the ASG....
OK cool but how is your tag set like that? I am using module version
And the tag on the ASG is
I'm using
v2.3.1 is not working either. Here is the module I ran, the output, and an AWS console screenshot. Module:
Autoscaling output - tags.2.key is wrong:
You are passing the string
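(For readers hitting the same error: whichever module variable carries worker tags in your version - see the module's autoscaling.md - the value ultimately lands in the ASG's `tags` argument, which expects a list of maps with `key`, `value`, and `propagate_at_launch`, not a single string. A minimal sketch of that shape against the AWS provider directly; all resource names, sizes, and references below are placeholders:)

```hcl
# Sketch only: the list-of-maps shape the aws_autoscaling_group "tags"
# argument expects. Names, sizes, and referenced resources are placeholders.
resource "aws_autoscaling_group" "workers" {
  name                 = "eks-workers"
  min_size             = 1
  max_size             = 5
  launch_configuration = "${aws_launch_configuration.workers.name}"
  vpc_zone_identifier  = ["${var.subnets}"]

  # A list of maps, not a string:
  tags = [
    {
      key                 = "k8s.io/cluster-autoscaler/enabled"
      value               = "true"
      propagate_at_launch = "true"
    },
  ]
}
```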
Thanks @dpiddockcmp - that was my problem. I failed to catch that. I went back to the documentation and it clearly states that. I really appreciate all your help.
We've all made these mistakes. Glad you got it sorted 🙂
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
I'm submitting a bug report
What is the current behavior?
node autoscaling does not scale any nodes
If this is a bug, how to reproduce? Please include a code sample if relevant.
Created a ClusterRoleBinding:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
Installed cluster-autoscaler via Helm.
Cluster-autoscaler logs -
What's the expected behavior?
autoscaling should work - https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/autoscaling.md
Are you able to fix this problem and submit a PR? Link here if you have already. nope.
Environment details
terraform-aws-modules/eks/aws 2.2.1
aws eks ec2 worker node
Terraform v0.11.13
Any other relevant info
I noticed there is no k8s.io/cluster-autoscaler/enabled tag created on the EC2 worker nodes. I tried adding it manually and restarting the cluster-autoscaler pod - it did not work.
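(Note on that last point: with ASG auto-discovery, cluster-autoscaler matches tags on the Auto Scaling group itself, not on individual EC2 instances, which is why tagging the worker nodes by hand has no effect. A sketch of the conventional pair of discovery tags, with a placeholder cluster name:)

```hcl
# Sketch: the two conventional auto-discovery tags, applied to the ASG
# (not to the instances). "my-cluster" is a placeholder cluster name.
tags = [
  {
    key                 = "k8s.io/cluster-autoscaler/enabled"
    value               = "true"
    propagate_at_launch = "true"
  },
  {
    key                 = "k8s.io/cluster-autoscaler/my-cluster"
    value               = "true"
    propagate_at_launch = "true"
  },
]
```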