[Bug] Degraded managedNodeGroups when using a pathed instanceRoleARN #7846
AWS support provided some steps for their reproduction of the issue:
Step 1 => I created a trust policy with the below mentioned content:
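A standard EC2 trust policy along those lines would be (a sketch; the exact document used may differ):

```sh
# Assumed trust policy: allows EC2 instances to assume the node role.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
```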
Step 2 => I created a role with a path using the below mentioned command:
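Something along these lines (role name assumed to match the ARN quoted in Step 4):

```sh
# Create the node role under the /eks/ path (role name assumed from Step 4).
aws iam create-role \
  --role-name test-node-role \
  --path /eks/ \
  --assume-role-policy-document file://trust-policy.json
```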
Step 3 => I created an EKS cluster and nodegroup with the below mentioned config file, using "eksctl create cluster -f test.yaml":
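A minimal config consistent with that description would look roughly like this (cluster name, region, and instance sizing are assumptions; the pathed instanceRoleARN is the part that matters):

```sh
# Assumed minimal test.yaml; 111122223333 is a placeholder account ID.
cat > test.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: us-east-1
managedNodeGroups:
  - name: test-ng
    instanceType: m5.large
    desiredCapacity: 2
    iam:
      instanceRoleARN: arn:aws:iam::111122223333:role/eks/test-node-role
EOF

eksctl create cluster -f test.yaml
```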
Step 4 => The nodegroup that was created shows the IAM role as "arn:aws:iam::55555555555:role/test-node-role" on the EKS console. The access entry that is created automatically has the complete path "/eks/" included, but it is stripped from the node group. The CreateNodegroup API call and CloudFormation stack show the below mentioned configuration for the node role passed:
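One way to confirm what was recorded (cluster and nodegroup names as assumed in the sketch above; the stripped ARN is as quoted from the console):

```sh
# Inspect the node role actually recorded on the managed nodegroup;
# per the report it comes back without the /eks/ path.
aws eks describe-nodegroup \
  --cluster-name test \
  --nodegroup-name test-ng \
  --query 'nodegroup.nodeRole' \
  --output text
# e.g. arn:aws:iam::111122223333:role/test-node-role   <- path stripped
```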
So, eksctl seems to be stripping the path from the node role, which eventually leads to health issues on the node with the error "access entry not found in cluster".
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Bump for stalebot
What were you trying to accomplish?
We launch EKS clusters using instanceRoleARN to attach managed policies (AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly) to our node group instances. We provided a path of "/eks/" on these roles for organizational purposes. We'd like to be able to manage these node groups, but the pathing seems to cause a degradation in node group health.
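For reference, attaching those managed policies to a pathed node role looks roughly like this (role name is a placeholder, not our actual value):

```sh
# Attach the three managed policies to the pathed node role.
# Note: attach-role-policy takes the role *name*, which never includes the path.
for policy in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy \
    --role-name ROLE_NAME \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
```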
What happened?
The cluster creates as expected, but after about an hour or so the node group shows up as degraded. It's a little tough to tell with the redactions, but the ARN shown in the "Affected resources" column lacks the /eks/ path prefix. Removing the path parameter from the role seems to avoid the issue.
How to reproduce it?
We use an eksctl config template like this, where the instance role ARN is "arn:aws:iam::ACCOUNT:role/eks/ROLE_NAME".
Logs
Output from eksctl during creation is normal.
Anything else we need to know?
What OS are you using? macOS
Are you using a downloaded binary or did you compile eksctl? downloaded via asdf
What type of AWS credentials are you using (i.e. default/named profile, MFA)? SSO
Versions