
EKS update-kubeconfig overwrites user details when using multiple roles against the same cluster #4079

Open
jpallari opened this issue Apr 16, 2019 · 23 comments
Labels
customization (Issues related to CLI customizations, located in /awscli/customizations), eks-kubeconfig, feature-request (A feature should be added or improved), has-pr (This issue has a PR associated with it), p2 (This is a standard priority issue)

Comments

@jpallari

I have an EKS cluster where I manage user access using IAM roles. This is roughly what my map roles would look like:

- rolearn: arn:aws:iam::123456789123:role/k8s-admin
  username: admin
  groups:
    - system:masters
- rolearn: arn:aws:iam::123456789123:role/k8s-developer
  username: developer
  groups:
    - developers

To use kubectl with EKS, I need to assume the right role. Therefore, I'll have to provide the role ARN when updating the kubeconfig:

aws eks update-kubeconfig --name mycluster --role-arn arn:aws:iam::123456789123:role/k8s-admin --alias mycluster-admin

So far so good. However, when I try to add my second role to the same kubeconfig...

aws eks update-kubeconfig --name mycluster --role-arn arn:aws:iam::123456789123:role/k8s-developer --alias mycluster-developer

...the user information for the previous config update is overwritten. You can see this in the kubeconfig file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64>
    server: https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.YYY.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
    user: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
  name: mycluster-admin
- context:
    cluster: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
    user: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
  name: mycluster-developer
current-context: mycluster-developer
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - eks
      - -r
      - arn:aws:iam::123456789123:role/k8s-developer
      command: aws-iam-authenticator
      env: null

Both contexts remain in the config file, but there is only one user entry, and it is attached to both contexts. There should be two user entries with distinct role ARNs.

I haven't verified this from the code, but I'm guessing the user is overwritten because the same user name is used for both updates.
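
A quick way to confirm this (an illustrative check, not from the original report) is to list the user entries after running both updates; with the bug, only a single cluster-ARN user appears:

# Print the names of all user entries in the active kubeconfig.
# With the bug, both updates collapse into one entry:
# arn:aws:eks:us-east-1:123456789123:cluster/mycluster
kubectl config view -o jsonpath='{.users[*].name}'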

@swetashre swetashre self-assigned this Apr 16, 2019
@swetashre

swetashre commented Apr 17, 2019

@jkpl - Thank you for your post. There would be two users with distinct role ARNs if you updated two distinct clusters. But in this case, since you are updating only one cluster, the role is replaced with the new role and the user entry is overwritten.

@swetashre swetashre added guidance Question that needs advice or information. eks-kubeconfig closing-soon This issue will automatically close in 4 days unless further comments are made. labels Apr 18, 2019
@no-response

no-response bot commented Apr 25, 2019

This issue has been automatically closed because there has been no response to our request for more information from the original author. With only the information that is currently in the issue, we don't have enough information to take action. Please reach out if you have or find the answers we need so that we can investigate further.

@no-response no-response bot closed this as completed Apr 25, 2019
@jpallari
Author

Hi! Sorry for the late response. Yes, that seems to be the case: updating the config with two different clusters creates distinct contexts, but updating the config with a different role updates the existing context. I can see how that would be useful if you want to edit a cluster's config.

Any chance the command could also support adding multiple contexts to the config for the same cluster?

@no-response no-response bot removed the closing-soon This issue will automatically close in 4 days unless further comments are made. label Apr 25, 2019
@no-response no-response bot reopened this Apr 25, 2019
@hugoprudente

Hi team,

I also experienced something like this. Steps to reproduce:

I create my .kube/config using:

aws eks update-kubeconfig --name hugoprudente-cluster-12

I manually update it to use my --role-arn to allow my other users access to the cluster.

- name: arn:aws:eks:eu-west-1:004815162342:cluster/hugoprudente-cluster-12
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - hugoprudente-cluster-12
      - --role-arn
      - arn:aws:iam::004815162342:role/Admin
      command: aws

If I run aws eks update-kubeconfig --name hugoprudente-cluster-12 again, the --role-arn gets removed.

- name: arn:aws:eks:eu-west-1:004815162342:cluster/hugoprudente-cluster-12
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - hugoprudente-cluster-12
      command: aws

@jpallari
Author

Hey Hugo! Any reason why you manually update it instead of using the --role-arn flag with update-kubeconfig?

aws eks update-kubeconfig --name hugoprudente-cluster-12 --role-arn arn:aws:iam::004815162342:role/Admin

It seems that you need to always provide the role ARN to update-kubeconfig or it will be removed from the existing config.

@yourilefers

Maybe to give this some more priority: we are experiencing the same issue with our setup. We have two clusters, prod & test, divided into multiple namespaces, with permissions set up per namespace using IAM role bindings. So when one of our developers needs access to two namespaces on the same cluster, they have to run the setup command each time before accessing one, because the user name is based on the cluster name and updating the settings "updates" the rolearn. This doesn't feel correct.

@diclophis

@yourilefers I agree ... our organization is attempting to structure our RBAC in a similar way and is running directly into this problem...

@huyu

huyu commented May 4, 2020

We are also experiencing this issue. I don't understand why the cluster ARN is used as the user name, which means we can't set a different user name for each context.

@ausmith

ausmith commented Jun 23, 2020

In case it helps anyone else, we stopped using update-kubeconfig and instead update our users' kubeconfig files with the following commands (per cluster, per role):

# Set the cluster object in the kube config.
kubectl config set-cluster "${cluster_arn}" --server="${cluster_domain}"

# Add the certificate data for the indicated cluster, data pulled from the `aws eks describe-cluster` output.
kubectl config set "clusters.${cluster_arn}.certificate-authority-data" "${cluster_cert_data}"

# Set the user's credentials.
kubectl config set-credentials "${desired_context_alias}" --exec-command=aws --exec-arg=--region --exec-arg="${cluster_region}" --exec-arg=--profile --exec-arg="${role_profile_name}" --exec-arg=eks --exec-arg=get-token --exec-arg=--cluster-name --exec-arg="${cluster_name}" --exec-api-version=client.authentication.k8s.io/v1alpha1

# Set the context that combines the cluster information and the credentials set in the above commands.
# The namespace is 100% optional.
kubectl config set-context "${desired_context_alias}" --cluster="${cluster_arn}" --namespace="${desired_role_namespace}" --user="${desired_context_alias}"

Needless to say, that was a LOT of steps to take. Apologies if the above code doesn't quite work for you, as I had to generalize our internal code to post it publicly (i.e. some variable references might be inconsistent due to hand editing without running the code). The above assumes users have their AWS profiles set locally through one of the mechanisms available to do that.
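
If you have to do this for many roles, the per-role steps can be wrapped in a small shell function. This is a sketch under the same assumptions as the snippet above; the function name and parameters are made up, and it assumes the cluster entry (set-cluster plus certificate data) has already been created. It also uses client.authentication.k8s.io/v1beta1, since v1alpha1 was removed from newer Kubernetes clients.

# Hypothetical wrapper around the set-credentials/set-context steps above.
add_eks_role_context() {
  local alias="$1" cluster_arn="$2" cluster_name="$3" region="$4" profile="$5" namespace="$6"
  # One user entry per alias, so roles no longer collide on the cluster ARN.
  kubectl config set-credentials "${alias}" \
    --exec-command=aws \
    --exec-arg=--region --exec-arg="${region}" \
    --exec-arg=--profile --exec-arg="${profile}" \
    --exec-arg=eks --exec-arg=get-token \
    --exec-arg=--cluster-name --exec-arg="${cluster_name}" \
    --exec-api-version=client.authentication.k8s.io/v1beta1
  # One context per alias, pointing at the per-alias user.
  kubectl config set-context "${alias}" \
    --cluster="${cluster_arn}" --namespace="${namespace}" --user="${alias}"
}

# Example: two contexts, two roles, one cluster.
add_eks_role_context mycluster-admin "${cluster_arn}" mycluster us-east-1 admin-profile default
add_eks_role_context mycluster-developer "${cluster_arn}" mycluster us-east-1 dev-profile default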

@dgomesbr

@kdaily fyi

@kdaily kdaily added feature-request A feature should be added or improved. service-api This issue is due to a problem in a service API, not the SDK implementation. and removed guidance Question that needs advice or information. labels Jul 29, 2020
@kdaily
Member

kdaily commented Jul 29, 2020

@dgomesbr @sarmad-abualkaz - I contacted the EKS team to check in on this and the associated PR. Thanks!

@kdaily kdaily assigned kdaily and unassigned swetashre Jul 30, 2020
@sarmad-abualkaz

@dgomesbr @sarmad-abualkaz - I contacted the EKS team to check in on this and the associated PR. Thanks!

Hi @kdaily, any updates from the EKS team on this issue?

@kdaily
Member

kdaily commented Sep 23, 2020

@sarmad-abualkaz - sorry, no update.

@kdaily kdaily removed their assignment Oct 1, 2020
@ryanmt

ryanmt commented Oct 20, 2020

IMO, this behavior is extremely surprising in the first place and really warrants being treated as a bug rather than a feature request. I appreciate that there are additional features that would make it easier to handle "principle of least privilege" patterns against a cluster, but fundamentally the behavior here is surprising and likely causes unexpected consequences for those of us attempting to follow RBAC best practices.

@kdaily kdaily added the investigating This issue is being investigated and/or work is in progress to resolve the issue. label Oct 20, 2020
@kdaily kdaily self-assigned this Oct 20, 2020
@kdaily
Member

kdaily commented Oct 20, 2020

Hi @ryanmt, thanks for the feedback. Does the proposed solution in #5413 work as you would expect?

@ryanmt

ryanmt commented Oct 20, 2020

@kdaily It looks like a good fit to me. That's effectively what I've done when I manually reconciled the bug in a kubeconfig.

@kdaily kdaily removed the investigating This issue is being investigated and/or work is in progress to resolve the issue. label Oct 21, 2020
@kdaily kdaily removed their assignment Oct 21, 2020
@kdaily kdaily added the customization Issues related to CLI customizations (located in /awscli/customizations) label Nov 12, 2020
@splichy

splichy commented Aug 3, 2021

@kdaily any update?

@dgomesbr @sarmad-abualkaz - I contacted the EKS team to check in on this and the associated PR. Thanks!

The PR has been open for more than a year 😿

@rafops

rafops commented Aug 23, 2022

As a temporary solution I am using yq to rename the user and merge the kubeconfigs:

aws eks update-kubeconfig --name $CLUSTER_NAME --alias $ROLE_NAME --kubeconfig kubeconfig.tmp

yq ".users[0].name=\"$ROLE_NAME\" | .contexts[0].context.user=\"$ROLE_NAME\"" < kubeconfig.tmp > kubeconfig.tmp2

KUBECONFIG="kubeconfig.tmp2:$KUBECONFIG" kubectl config view --flatten

@a-thomas-22

@rafops got me across the finish line, but the functional workaround I have implemented for now is separate kubeconfigs.

Example:

aws eks update-kubeconfig --name $cluster --profile $profile --region $region --alias ${cluster##*/}-SSO --kubeconfig ~/.kube/ssokubeconfig
yq '.users[].name += "-SSO"' -i ~/.kube/ssokubeconfig
yq  '.contexts[].context.user += "-SSO"' -i ~/.kube/ssokubeconfig
export KUBECONFIG="${HOME}/.kube/config:${HOME}/.kube/ssokubeconfig"
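
As a quick sanity check (illustrative, not part of the original workaround), list the merged contexts; both the original and the *-SSO entries should appear, each bound to its own user:

kubectl config get-contexts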

@tim-finnigan tim-finnigan added has-pr This issue has a PR associated with it. p2 This is a standard priority issue and removed service-api This issue is due to a problem in a service API, not the SDK implementation. labels Nov 14, 2022
@marcofranssen

marcofranssen commented Jan 4, 2023

Ran into exactly the same issue just now. Hard to believe this issue has been open for this many years without a fix. What is required to move this issue forward and get a solution implemented that also applies the alias to the user entries?

Looking at the codebase, it should just be a matter of also using the --alias flag for the user mapping within a context.

A practical use case is where you have various IAM roles that each give access to specific namespaces in a cluster.

aws eks update-kubeconfig --name my-cluster --region us-east-1 \
  --role-arn arn:aws:iam::123456789012:role/my-cluster-eks-app-a-dev \
  --alias my-cluster-dev-A
aws eks update-kubeconfig --name my-cluster --region us-east-1 \
  --role-arn arn:aws:iam::123456789012:role/my-cluster-eks-app-b-dev \
  --alias my-cluster-dev-B

This allows me to switch between the contexts to access and manage applications deployed in various namespaces.

kubectl config use-context my-cluster-dev-A
kubectl -n namespace-x get pods
kubectl -n namespace-y get pods
kubectl config use-context my-cluster-dev-B
kubectl -n namespace-z get pods

Currently the last aws eks update-kubeconfig invocation overwrites the user mapping for the other contexts created earlier for the same cluster, meaning you can only configure a single user per cluster, regardless of whether you use an alias.

@NicoForce

Can someone please label this as a bug? This is not a feature request; aws eks update-kubeconfig simply doesn't work as it should.

@yuxiang-zhang
Contributor

Kindly note that this issue has been fixed with the new --user-alias flag implemented in #5165 and available since v1.27.98 and v2.11.6.
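
For example, replaying the invocations from the original report with the new flag (same cluster and role names as above) should now produce two distinct user entries:

aws eks update-kubeconfig --name mycluster \
  --role-arn arn:aws:iam::123456789123:role/k8s-admin \
  --alias mycluster-admin --user-alias mycluster-admin
aws eks update-kubeconfig --name mycluster \
  --role-arn arn:aws:iam::123456789123:role/k8s-developer \
  --alias mycluster-developer --user-alias mycluster-developer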

@marcofranssen

marcofranssen commented Mar 29, 2023

Kindly note that this issue has been fixed with the new --user-alias flag implemented in #5165 and available since v1.27.98 and v2.11.6.

Feels like a weird way of resolving the bug in the --alias flag. In the end I just have to duplicate the same value across --alias and --user-alias.

aws eks update-kubeconfig --name cluster-a --region us-east-1 \
  --alias cluster-a-admin --user-alias cluster-a-admin \
  --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-admins-access
aws eks update-kubeconfig --name cluster-a --region us-east-1 \
  --alias cluster-a-dev --user-alias cluster-a-dev \
  --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-developers-access
aws eks update-kubeconfig --name cluster-a --region us-east-1 \
  --alias cluster-a-auditor --user-alias cluster-a-auditor \
  --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-auditors-access

aws eks update-kubeconfig --name cluster-b --region eu-west-1 \
  --alias cluster-b-admin --user-alias cluster-b-admin \
  --role-arn arn:aws:iam::123456789012:role/cluster-b-eks-admins-access
aws eks update-kubeconfig --name cluster-b --region eu-west-1 \
  --alias cluster-b-dev --user-alias cluster-b-dev \
  --role-arn arn:aws:iam::123456789012:role/cluster-b-eks-developers-access
aws eks update-kubeconfig --name cluster-b --region eu-west-1 \
  --alias cluster-b-auditor --user-alias cluster-b-auditor \
  --role-arn arn:aws:iam::123456789012:role/cluster-b-eks-auditors-access

Why doesn't this make sense?

Because you can't reuse the same username with different contexts like this.

aws eks update-kubeconfig --name cluster-a --region us-east-1 \
  --alias cluster-a-admin --user-alias admin \
  --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-admins-access
aws eks update-kubeconfig --name cluster-b --region eu-west-1 \
  --alias cluster-b-admin --user-alias admin \
  --role-arn arn:aws:iam::123456789012:role/cluster-b-eks-admins-access

This would result in the admin user for cluster-a getting overwritten.
So in essence there is no value in having another flag, when the bug could have been resolved on the original --alias flag.

🤷‍♂️

It would also be good if this issue were marked as a bug. This wasn't a feature request, IMHO.

The real BUG Fix

Basically what should happen is the following:

Simplify the CLI to just the following, like it was originally:

  aws eks update-kubeconfig --name cluster-a --region us-east-1 \
-   --alias cluster-a-admin --user-alias cluster-a-admin \
+   --alias cluster-a-admin \
    --role-arn arn:aws:iam::123456789012:role/cluster-a-eks-admins-access

Fix the BUG in the --alias flag by also handling that parameter in the user config section. Then the bug is truly resolved, while maintaining a proper user experience and reducing the risk, mentioned above, that your users will be overwritten if the flags are not used exactly right.

  users:
- - name: arn:aws:eks:us-east-1:123456789012:cluster/cluster-a
+ - name: cluster-a-admin
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
        - --region
        - us-east-1
        - eks
        - get-token
        - --cluster-name
        - cluster-a
        - --role
        - arn:aws:iam::123456789012:role/cluster-a-eks-admins-access
        command: aws
        interactiveMode: IfAvailable
        provideClusterInfo: false
  contexts:
  - context:
      cluster: arn:aws:eks:us-east-1:123456789012:cluster/cluster-a
-     user: arn:aws:eks:us-east-1:123456789012:cluster/cluster-a
+     user: cluster-a-admin
    name: cluster-a-admin

In essence, the original bug is still there, and the new feature will just introduce new issues if not used exactly as explained above. And even when used as above, it degrades the user experience, since you have to duplicate the value across two parameters.
