AWS Cluster Autoscaler Permissions #113
Judging by the code at https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/aws_manager.go#L114, it looks like you've passed an incorrect group name.
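A quick way to compare names (a hypothetical session; the region is a placeholder and the group name is taken from the report below):

```sh
# List every Auto Scaling group name AWS knows about in this region...
aws autoscaling describe-auto-scaling-groups \
  --region us-west-2 \
  --query 'AutoScalingGroups[].AutoScalingGroupName'

# ...then query the one the autoscaler was configured with.
aws autoscaling describe-auto-scaling-groups \
  --region us-west-2 \
  --auto-scaling-group-names nodes.dev.clusters.mydomain.io
```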
@pluttrell Was it a problem with the group name?
Nope, the group names were identical to what was in AWS. We do, however, have the aws-cluster-autoscaler working perfectly using the Kubernetes resource files directly, without Helm, so we've gone with that option for now.
Great :). Closing the bug.
Getting a similar error with kops 1.7.0, Kubernetes 1.7.5, and cluster-autoscaler 0.6.1, but only when trying to scale up from 0 nodes. According to this, as of CA 0.6.1 I should be able to scale to/from 0. I'm getting errors like this:

I'm using a deployment similar to this one, and it works as long as there is at least 1 node up (sketch below):
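A minimal sketch of such a deployment, assuming CA 0.6.x flags and a kops-style ASG name (the image tag, namespace, ASG name, and region are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - name: cluster-autoscaler
          image: gcr.io/google_containers/cluster-autoscaler:v0.6.1
          command:
            - ./cluster-autoscaler
            - --v=4
            - --cloud-provider=aws
            # A minimum of 0 lets the group scale down to, and back up from, zero.
            - --nodes=0:10:nodes.dev.clusters.mydomain.io
          env:
            - name: AWS_REGION
              value: us-west-2
      # Run on the master so the autoscaler survives the node group reaching 0.
      nodeSelector:
        kubernetes.io/role: master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
```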
Figured this out: a kube-dns pod was not running on the master node. To run it there, I had to add the master toleration to the kube-dns deployment (same as with the cluster-autoscaler deployment above). Once kube-dns was running on the master, the autoscaler was able to use it to get ASG info from AWS and scale up from 0 nodes.
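For reference, the toleration in question, as it could be patched into the kube-dns Deployment's pod spec (a sketch; the key matches the standard master taint):

```yaml
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
```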
Curious, does cluster-autoscaler depend on the in-cluster DNS service? Probably not? Instead of putting kube-dns on the master, what about setting `dnsPolicy: Default` on the pod? Using the node's resolver, it wouldn't depend on kube-dns at all.
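A sketch of that change on the cluster-autoscaler pod spec (assuming the suggestion here is the pod-level DNS policy):

```yaml
spec:
  template:
    spec:
      # Default = inherit the node's resolv.conf, so AWS endpoints resolve
      # without any kube-dns pod being schedulable.
      dnsPolicy: Default
```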
@MrHohn @7chenko @StevenACoffman I have tried that. I'm still getting this error: `Failed to update node registry: RequestError: send request failed`. Please suggest.
This looks like a routing or firewall issue instead.
I'm getting the original error posted above. What steps can I take to debug and fix this?
I think that I have the correct AWS permissions to describe the autoscaling groups. If I exec into the cluster-autoscaler pod and install the AWS CLI, the same describe call succeeds (session below).
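Roughly that session (a reconstruction; the pod name, region, and group name are placeholders, and it assumes a shell plus pip are available in the image):

```sh
# Shell into the running autoscaler pod
kubectl -n kube-system exec -it cluster-autoscaler-xxxxx -- sh

# Inside the pod: install the AWS CLI and issue the same describe call
pip install awscli
aws autoscaling describe-auto-scaling-groups \
  --region us-west-2 \
  --auto-scaling-group-names nodes.dev.clusters.mydomain.io
```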
Briefly looking at the code, it seems that AWS returns no groups with this name. Based on the error message, the method is called with the correct group name. I'm unable to replicate or debug it, but if you get different results for requests made by the Go library and the command-line tool, the maintainers of those tools may be better able to help.
@srossross-tableau can you confirm that the original request is including the `AWS_REGION`? You might need to make sure your env includes:

```yaml
env:
  - name: AWS_REGION
    value: us-west-2
```
Thanks @christopherhein, that was the issue.
I tested this and feel this is the best approach. It keeps you from having to modify the kube-dns deployment while keeping your masters clean. Thanks!!
Setting `dnsPolicy: Default` on the cluster-autoscaler deployment worked for me.
I met the same error on EKS 1.13; this helped me a lot. Thank you very much @gazal-k
Using v0.5.4 of the aws-cluster-autoscaler, we're getting this error:
It sure looks like a permission problem... But per the instructions, I have the following policy on my instance role named `nodes.dev.clusters.mydomain.io`:
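Presumably the policy from the cluster-autoscaler AWS documentation, along these lines (the exact action list may have differed):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
```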
Without this addition, I get a different error:
So we're thinking that we have the necessary permissions.
For reference, here's our execution config (sketched below):
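A minimal sketch of a v0.5.x invocation, reusing the ASG name above (the region and node bounds are placeholders):

```yaml
command:
  - ./cluster-autoscaler
  - --v=4
  - --cloud-provider=aws
  - --nodes=1:10:nodes.dev.clusters.mydomain.io
env:
  - name: AWS_REGION
    value: us-west-2
```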
Any ideas on what to do?
Is there any strategy for debugging this?