Cluster autoscaler addon permission problem #3871
What IAM perms did you add?
/cc @KashifSaadat
@arun-gupta do you have an IAM expert who can comment? We are limiting the master role for the autoscaler to only have access to ASGs that are tagged with KubernetesCluster, and this is not working.
If someone wants to comment out this code https://github.com/kubernetes/kops/blob/master/pkg/model/iam/iam_builder.go#L664 and just use the regular resource (`Resource: resource,`), that would fix it. I do not have the autoscaler set up, so testing would take a long time for me.
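For context, the statement being discussed scopes the AWS Auto Scaling actions with a tag condition roughly like the sketch below. The action list, `Resource` value, and cluster name are illustrative rather than copied from `iam_builder.go`; "commenting out" the condition code amounts to dropping the `Condition` block so the statement falls back to the plain resource.

```json
{
  "Effect": "Allow",
  "Action": [
    "autoscaling:DescribeAutoScalingGroups",
    "autoscaling:DescribeAutoScalingInstances",
    "autoscaling:SetDesiredCapacity",
    "autoscaling:TerminateInstanceInAutoScalingGroup"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "ec2:ResourceTag/KubernetesCluster": "my.example.com"
    }
  }
}
```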
@chrislovecnm I added it to the policy.
/area security Can you PR? I will review and get it into the kops 1.8 release.
Hi @ftoresan, thanks for raising an issue! According to the AWS Auto Scaling IAM Documentation, we should actually be using the `autoscaling:ResourceTag` condition key prefix rather than `ec2:ResourceTag`. Please would you be able to test this case and raise a PR? The relevant locations to update the code (and tests) are:
Ok @KashifSaadat, I'll make the change and create the PR. Thanks for the reply!
That would be great, thank you! Let me know if you have any questions. :)
Automatic merge from submit-queue. Changing the prefix of the ResourceTag condition. The prefix was `ec2` and it was not working; changing it to `autoscaling` should do the trick. This should fix #3871
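Conceptually, the merged change only swaps the condition key prefix on that statement. An illustrative diff is shown below; the cluster name is a placeholder, and the real change lives in `pkg/model/iam/iam_builder.go` and its tests.

```diff
 "Condition": {
   "StringEquals": {
-    "ec2:ResourceTag/KubernetesCluster": "my.example.com"
+    "autoscaling:ResourceTag/KubernetesCluster": "my.example.com"
   }
 }
```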
What kops version are you running? (use `kops version`)
1.8.0-beta.1

What Kubernetes version are you running? (use `kubectl version`)
1.8.3

What cloud provider are you using?
AWS

What commands did you execute (please provide the cluster manifest, `kops get --name my.example.com`, if available) and what happened after the commands executed?
Installed the cluster-autoscaler add-on while having insufficient CPU on the current nodes.
What you expected to happen:
A new node to be created.
How can we reproduce it (as minimally and precisely as possible):
Create a cluster with the default instance group having min nodes = 2 and max nodes = 6. Apply the same configuration to the autoscaler addon while there is insufficient CPU to schedule the current pods.
The instance group applied the desired number to the ASG correctly, but the addon failed to do so. Its log contained an error saying that the EC2 role had insufficient permission to set the desired capacity.
My workaround: I added the permission to the IAM role without the condition that checks whether the "KubernetesCluster" tag with the cluster name is present. It seems that the autoscaler (maybe because it is running in a pod?) does not match the condition, while the instance group does.
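The workaround described above corresponds to attaching an unconditioned statement roughly like the following. This is a sketch assuming the same illustrative action set as earlier; note that without the condition it grants access to every ASG in the account, not only those tagged for the cluster.

```json
{
  "Effect": "Allow",
  "Action": [
    "autoscaling:DescribeAutoScalingGroups",
    "autoscaling:DescribeAutoScalingInstances",
    "autoscaling:SetDesiredCapacity",
    "autoscaling:TerminateInstanceInAutoScalingGroup"
  ],
  "Resource": "*"
}
```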