EKS update-config overwrites user details when using multiple roles against the same cluster #4079
@jkpl - Thank you for your post. There would be two users with distinct role ARNs if you updated two distinct clusters. In this case, however, you are updating only one cluster, so the role is changed to the new role and the user entry is overwritten. |
This issue has been automatically closed because there has been no response to our request for more information from the original author. With only the information that is currently in the issue, we don't have enough information to take action. Please reach out if you have or find the answers we need so that we can investigate further. |
Hi! Sorry for the late response. Yes, that seems to be the case: updating the config with two different clusters would create distinct contexts, but updating the config with a different role updates an existing context. I can see how that would be useful, if you want to edit cluster config. Any chance the command could also support adding multiple contexts to the config for the same cluster? |
Hi team, I also experienced something like this. Steps to reproduce: I create my
I manually update it to use my --role-arn to allow my other users access to the cluster.
If I run the
|
Hey Hugo! Any reason why you manually update it instead of using the
It seems that you always need to provide the role ARN to |
Maybe to give this some more priority: we are experiencing the same issue in our setup. We have two clusters, prod and test, divided into multiple namespaces, with permissions set up per namespace using IAM role bindings. So when one of our developers needs access to two namespaces on the same cluster, they have to run the setup command each time before accessing one of them, because the username is based on the cluster name and updating the settings "updates" the role ARN. This doesn't feel correct. |
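For context, the per-namespace IAM role binding setup described above is typically wired up through the cluster's aws-auth ConfigMap, with each IAM role mapped to a group that a namespaced RoleBinding grants access to. A hypothetical sketch (all names are placeholders, not from this thread):

```yaml
# Hypothetical aws-auth mapRoles entries: one IAM role per namespace-scoped group.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/prod-eks-namespace-x
      username: namespace-x-user
      groups:
        - namespace-x-developers   # bound to a Role in namespace-x via a RoleBinding
    - rolearn: arn:aws:iam::123456789012:role/prod-eks-namespace-y
      username: namespace-y-user
      groups:
        - namespace-y-developers
```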
@yourilefers I agree ... our organization is attempting to structure our RBAC in a similar way and running directly into this problem... |
We are also experiencing this issue. I don't know why the cluster ARN is being used as the user name; it means we can't set a different user name for each context. |
In case it helps anyone else, we stopped using update-config and instead are updating our user's kube config files with the following commands (per cluster per role):
Needless to say, that was a LOT of steps to take. Apologies if the above code doesn't quite work for you as I had to generalize the code we have internally to post the details publicly (i.e. some variable references might be inconsistent due to human editing without running the code). The above assumes the users have their AWS profiles set locally through one of the mechanisms available to do that. |
@kdaily fyi |
@dgomesbr @sarmad-abualkaz - I contacted the EKS team to check in on this and the associated PR. Thanks! |
Hi @kdaily, any updates from the EKS team on this issue? |
@sarmad-abualkaz - sorry, no update. |
IMO, this behavior is extremely surprising in the first place and really warrants being treated as a bug rather than a feature request. I appreciate that there are additional features that would make least-privilege patterns against a cluster easier to handle, but fundamentally the behavior here is surprising and likely causes unexpected consequences for those of us trying to follow RBAC best practices. |
@kdaily It looks like a good fit to me. That's effectively what I've done when I manually reconciled the bug in a kubeconfig. |
@kdaily any update? The PR has been open for more than a year 😿 |
As a temporary solution I am using
|
@rafops's approach got me across the finish line; the functional workaround I have implemented for now is separate kubeconfigs. Example:
aws eks update-kubeconfig --name $cluster --profile $profile --region $region --alias ${cluster##*/}-SSO --kubeconfig ~/.kube/ssokubeconfig
yq '.users[].name += "-SSO"' -i ~/.kube/ssokubeconfig
yq '.contexts[].context.user += "-SSO"' -i ~/.kube/ssokubeconfig
export KUBECONFIG="${HOME}/.kube/config:${HOME}/.kube/ssokubeconfig" |
Ran into exactly the same issue right now. Hard to believe this issue has been open for this many years without a fix. What is required to move this forward and get a solution implemented so that the alias is used for the user entries as well? Looking at the codebase, it should just be a matter of also using the --alias flag for the user mapping within a context. A practical use case is where you have various IAM roles that give access to various specific namespaces in a cluster:
aws eks update-kubeconfig --name my-cluster --region us-east-1 \
  --role-arn arn:aws:iam::123456789012:role/my-cluster-eks-app-a-dev \
  --alias my-cluster-dev-A
aws eks update-kubeconfig --name my-cluster --region us-east-1 \
  --role-arn arn:aws:iam::123456789012:role/my-cluster-eks-app-b-dev \
  --alias my-cluster-dev-B
This allows me to switch between the contexts to access and manage applications deployed in various namespaces:
kubectl config use-context my-cluster-dev-A
kubectl -n namespace-x get pods
kubectl -n namespace-y get pods
kubectl config use-context my-cluster-dev-B
kubectl -n namespace-z get pods
Currently, the last update-kubeconfig invocation overwrites the single shared user entry, so only one of these contexts works at a time. |
Can someone please label this as a bug? This is not a feature request, |
Kindly note that this issue has been fixed with the new --user-alias option. |
Feels like a weird way of resolving the bug. With the new --user-alias flag, you end up running:
aws eks update-kubeconfig --name cluster-a --region us-east-1 \
  --alias cluster-a-admin --user-alias cluster-a-admin \
  --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-admins-access
aws eks update-kubeconfig --name cluster-a --region us-east-1 \
  --alias cluster-a-dev --user-alias cluster-a-dev \
  --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-developers-access
aws eks update-kubeconfig --name cluster-a --region us-east-1 \
  --alias cluster-a-auditor --user-alias cluster-a-auditor \
  --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-auditors-access
aws eks update-kubeconfig --name cluster-b --region eu-west-1 \
  --alias cluster-b-admin --user-alias cluster-b-admin \
  --role-arn arn:aws:iam::123456789012:role/cluster-b-eks-admins-access
aws eks update-kubeconfig --name cluster-b --region eu-west-1 \
  --alias cluster-b-dev --user-alias cluster-b-dev \
  --role-arn arn:aws:iam::123456789012:role/cluster-b-eks-developers-access
aws eks update-kubeconfig --name cluster-b --region eu-west-1 \
  --alias cluster-b-auditor --user-alias cluster-b-auditor \
  --role-arn arn:aws:iam::123456789012:role/cluster-b-eks-auditors-access
Why doesn't this make sense? Because you can't reuse the same user name with different contexts like this:
aws eks update-kubeconfig --name cluster-a --region us-east-1 \
  --alias cluster-a-admin --user-alias admin \
  --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-admins-access
aws eks update-kubeconfig --name cluster-b --region eu-west-1 \
  --alias cluster-b-admin --user-alias admin \
  --role-arn arn:aws:iam::123456789012:role/cluster-b-eks-admins-access
This would result in the admin user for cluster-a getting overwritten. 🤷♂️ Maybe it would also be good if this issue were marked as a bug.
The real bug fix: basically, what should happen is the following. Simplify the CLI back to just this, like it was originally:
  aws eks update-kubeconfig --name cluster-a --region us-east-1 \
-   --alias cluster-a-admin --user-alias cluster-a-admin \
+   --alias cluster-a-admin \
    --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-admins-access
And fix the bug on the users, naming the user entry after the alias instead of the cluster ARN:
  users:
- - name: arn:aws:eks:us-east-1:123456789012:cluster/cluster-a
+ - name: cluster-a-admin
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
        - --region
        - us-east-1
        - eks
        - get-token
        - --cluster-name
        - cluster-a
        - --role
        - arn:aws:iam::123456789012:role/cluster-a-eks-admins-access
        command: aws
        interactiveMode: IfAvailable
        provideClusterInfo: false
  contexts:
  - context:
      cluster: arn:aws:eks:us-east-1:123456789012:cluster/cluster-a
-     user: arn:aws:eks:us-east-1:123456789012:cluster/cluster-a
+     user: cluster-a-admin
    name: cluster-a-admin
In essence, the original bug is still there, and the new feature just introduces new issues if not used exactly as explained above. And if used as above, it degrades the user experience, because you have to duplicate the same value in two parameters. |
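The defaulting rule this comment argues for can be sketched in a few lines of shell (a sketch of the proposal, not awscli's actual implementation): the kubeconfig user name should fall back from --user-alias to --alias to the cluster ARN.

```shell
# Sketch of the proposed precedence, not awscli's actual code:
# user name = --user-alias, else --alias, else the cluster ARN.
resolve_user_name() {
  cluster_arn=$1; ctx_alias=$2; user_alias=$3
  echo "${user_alias:-${ctx_alias:-$cluster_arn}}"
}

# With this rule, --alias alone already yields distinct per-role user
# entries, with no need to repeat the value in --user-alias.
resolve_user_name "arn:aws:eks:us-east-1:123456789012:cluster/cluster-a" "cluster-a-admin" ""
# prints: cluster-a-admin
```

Only users who explicitly want a shared user entry would then pass --user-alias, which keeps today's legitimate uses of the flag while removing the need to duplicate the alias.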
I have an EKS cluster where I manage user access using IAM roles. This is roughly what my map roles would look like:
To use kubectl with EKS, I need to assume the right role. Therefore, I'll have to provide the role ARN when updating the kubeconfig:
So far so good. However, when I try to add my second role to the same kubeconfig...
...the user information for the previous config update is overwritten. You can see this in the kubeconfig file:
Both contexts remain in the config file, but there's only one user which is attached to both contexts. There should be two users with distinct role ARNs.
I haven't verified this from the code, but I'm guessing the user is overwritten because the same user name is used for both updates.
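As a minimal illustration of the resulting state (placeholder names, not the author's actual file), the kubeconfig ends up with two contexts pointing at one cluster-ARN-keyed user, which holds only the most recently supplied role:

```yaml
# Illustrative only: both contexts share the single user entry,
# so only the last --role-arn passed to update-kubeconfig survives.
contexts:
- name: role-a-context
  context:
    cluster: arn:aws:eks:us-east-1:111122223333:cluster/my-cluster
    user: arn:aws:eks:us-east-1:111122223333:cluster/my-cluster
- name: role-b-context
  context:
    cluster: arn:aws:eks:us-east-1:111122223333:cluster/my-cluster
    user: arn:aws:eks:us-east-1:111122223333:cluster/my-cluster
users:
- name: arn:aws:eks:us-east-1:111122223333:cluster/my-cluster
  user:
    exec:
      command: aws
      args: [eks, get-token, --cluster-name, my-cluster,
             --role, "arn:aws:iam::111122223333:role/role-b"]  # role-a's ARN is gone
```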