
[aws-eks] Can't log into fresh EKS cluster with SAML mastersRole #6982

Closed
dr3s opened this issue Mar 24, 2020 · 20 comments
Labels
@aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service bug This issue is a bug. closed-for-staleness This issue was automatically closed because it hadn't received any attention in a while. effort/small Small work item – less than a day of effort p1 response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days.

Comments

@dr3s

dr3s commented Mar 24, 2020

I used the CDK to create an EKS cluster with an assumed role and cannot log in, even though the role I set as the master role is one I can assume. Unlike #3752, I did set the mastersRole.

I followed the example here:
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-eks-readme.html

Reproduction Steps

Initially I thought setting the mastersRole should be enough:

// admin role
const clusterAdmin = iam.Role.fromRoleArn(this, 'AdminRole',
  "arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team");

const cluster = new eks.Cluster(this, 'KubeFlowCluster', {
  defaultCapacity: 3,
  defaultCapacityInstance: new ec2.InstanceType('t3.large'),
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

I thought that should also set up the aws-auth mapping in EKS, but I have since added the following, which also didn't help:

cluster.awsAuth.addMastersRole(clusterAdmin)

In fact, this wasn't necessary and just added a duplicate masters role entry, but I wanted to illustrate what I tried.

Error Log

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get cluster
NAME REGION
KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3 eu-west-1

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get iamidentitymapping --cluster KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3
Error: getting auth ConfigMap: Unauthorized

Environment

  • CLI Version: 1.27.0 (build a98c0b3)
  • Framework Version: node v11.10.1
  • OS: OS X
  • Language: typescript

Other

This is the CloudFormation template section generated by the CDK for the aws-auth manifest:


"KubeFlowClusterAwsAuthmanifest4ABE9919": {
      "Type": "Custom::AWSCDK-EKS-KubernetesResource",
      "Properties": {
        "ServiceToken": {
          "Fn::GetAtt": [
            "awscdkawseksKubectlProviderNestedStackawscdkawseksKubectlProviderNestedStackResourceA7AEBA6B",
            "Outputs.KubeflowEksDevawscdkawseksKubectlProviderframeworkonEventA20B6922Arn"
          ]
        },
        "Manifest": {
          "Fn::Join": [
            "",
            [
              "[{\"apiVersion\":\"v1\",\"kind\":\"ConfigMap\",\"metadata\":{\"name\":\"aws-auth\",\"namespace\":\"kube-system\"},\"data\":{\"mapRoles\":\"[{\\\"rolearn\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"username\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"groups\\\":[\\\"system:masters\\\"]},{\\\"rolearn\\\":\\\"",
              {
                "Fn::GetAtt": [
                  "KubeFlowClusterDefaultCapacityInstanceRoleE883FDD5",
                  "Arn"
                ]
              },
              "\\\",\\\"username\\\":\\\"system:node:{{EC2PrivateDNSName}}\\\",\\\"groups\\\":[\\\"system:bootstrappers\\\",\\\"system:nodes\\\"]},{\\\"rolearn\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"username\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"groups\\\":[\\\"system:masters\\\"]}]\",\"mapUsers\":\"[]\",\"mapAccounts\":\"[]\"}}]"
            ]
          ]
        },

It may not be clear from the escaping, but the ConfigMap doesn't look correct to me: the mapRoles array appears to be encoded as a string instead of an array object.

apiVersion: v1
data:
  mapAccounts: '[]'
  mapRoles: '[{"rolearn":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","username":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","groups":["system:masters"]},{"rolearn":"arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterDefaultCapacityInst-1SBZV2PTF6QIH","username":"system:node:{{EC2PrivateDNSName}}","groups":["system:bootstrappers","system:nodes"]},{"rolearn":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","username":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","groups":["system:masters"]}]'
  mapUsers: '[]'
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"mapAccounts":"[]","mapRoles":"[{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]},{\"rolearn\":\"arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterDefaultCapacityInst-1SBZV2PTF6QIH\",\"username\":\"system:node:{{EC2PrivateDNSName}}\",\"groups\":[\"system:bootstrappers\",\"system:nodes\"]},{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]}]","mapUsers":"[]"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"aws-auth","namespace":"kube-system"}}
  creationTimestamp: "2020-03-08T14:19:08Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "4538"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: c65c4c0b-6147-11ea-a6b1-02aa720c17c2

This is 🐛 Bug Report

@dr3s dr3s added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Mar 24, 2020
@SomayaB SomayaB added the @aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service label Mar 25, 2020
@eladb
Contributor

eladb commented Mar 31, 2020

Needs a repro

@dr3s
Author

dr3s commented Apr 1, 2020

kubeflow-eks.zip

@dr3s
Author

dr3s commented Apr 1, 2020

Also see Case ID 6860089261

@eladb
Contributor

eladb commented Apr 12, 2020

mapRoles is expected to be an array encoded inside a string.

I am unable to reproduce this:

  1. Created a new IAM role with a trust policy that allowed me to assume it (e.g. trust the current account).
  2. Referenced this role as mastersRole: Role.fromArn(...).
  3. Deployed the cluster (see the sketch below).
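
For illustration only, here is a minimal TypeScript sketch of those steps inside a CDK v1 stack constructor; the construct IDs are placeholders and this is not the exact code used for this check:

import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';

// 1. A new role whose trust policy lets principals in the current account assume it.
const mastersRole = new iam.Role(this, 'ClusterMastersRole', {
  assumedBy: new iam.AccountRootPrincipal(),
});

// 2. + 3. Reference it as mastersRole and deploy; the CDK then maps this role
// into the aws-auth ConfigMap under system:masters.
const cluster = new eks.Cluster(this, 'Cluster', {
  mastersRole,
});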

Then, execute the following command to update k8s configuration:

aws eks update-kubeconfig --name <CLUSTER-NAME> --region us-east-2 --role-arn <ROLE-ARN>

Then:

kubectl get configmap/aws-auth -n kube-system -o yaml

Returns the expected aws-auth configuration.

I am closing for now. Reopen when you have additional information.

@eladb eladb closed this as completed Apr 12, 2020
@dr3s
Author

dr3s commented Apr 12, 2020

The string vs array comment was from Amazon premium support. They have since said they were mistaken.

Have you looked at the code I attached and its output of cdk synth?

The steps I have to do are different and maybe that's related:

  1. I'm using SAML roles and STS. I cannot use an IAM user and must log in with SSO and get an assumed role.
  2. It is this assumed role that I'm setting as the master role.
  3. I cannot do a ConfigMap get despite setting up the k8s context with the same role. The only way I'm able to use kubectl or eksctl is to assume the role the CDK created with the cluster.

AWS said it was a problem with the config map but now they have recanted. They instructed me to open this issue. I really have no idea but the code I attached is pretty simple and does not work for the flow I described.

@eladb eladb reopened this Apr 12, 2020
@eladb
Contributor

eladb commented Apr 12, 2020

What is the output you are getting when you run kubectl get all?

@dr3s
Author

dr3s commented Apr 13, 2020 via email

@eladb
Contributor

eladb commented Apr 13, 2020

Can you paste the aws eks update-kubeconfig command you are executing?

@eladb eladb added the p1 label Apr 13, 2020
@dr3s
Author

dr3s commented Apr 14, 2020

https://console.aws.amazon.com/support/home?region=eu-west-1#/case/?displayId=6860089261&language=en

(base) ➜ kubeflow-eks git:(master) ✗ kubectl get configmap aws-auth -n kube-system -o yaml
apiVersion: v1
data:
  mapAccounts: '[]'
  mapRoles: '[{"rolearn":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","username":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","groups":["system:masters"]},{"rolearn":"arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterDefaultCapacityInst-1SBZV2PTF6QIH","username":"system:node:{{EC2PrivateDNSName}}","groups":["system:bootstrappers","system:nodes"]},{"rolearn":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","username":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","groups":["system:masters"]}]'
  mapUsers: '[]'
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"mapAccounts":"[]","mapRoles":"[{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]},{\"rolearn\":\"arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterDefaultCapacityInst-1SBZV2PTF6QIH\",\"username\":\"system:node:{{EC2PrivateDNSName}}\",\"groups\":[\"system:bootstrappers\",\"system:nodes\"]},{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]}]","mapUsers":"[]"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"aws-auth","namespace":"kube-system"}}
  creationTimestamp: "2020-03-08T14:19:08Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "4538"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: c65c4c0b-6147-11ea-a6b1-02aa720c17c2

This is the CloudFormation template section generated by the CDK for the aws-auth manifest:

"KubeFlowClusterAwsAuthmanifest4ABE9919": {
"Type": "Custom::AWSCDK-EKS-KubernetesResource",
"Properties": {
"ServiceToken": {
"Fn::GetAtt": [
"awscdkawseksKubectlProviderNestedStackawscdkawseksKubectlProviderNestedStackResourceA7AEBA6B",
"Outputs.KubeflowEksDevawscdkawseksKubectlProviderframeworkonEventA20B6922Arn"
]
},
"Manifest": {
"Fn::Join": [
"",
[
"[{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"aws-auth","namespace":"kube-system"},"data":{"mapRoles":"[{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]},{\"rolearn\":\"",
{
"Fn::GetAtt": [
"KubeFlowClusterDefaultCapacityInstanceRoleE883FDD5",
"Arn"
]
},
"\",\"username\":\"system:node:{{EC2PrivateDNSName}}\",\"groups\":[\"system:bootstrappers\",\"system:nodes\"]},{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]}]","mapUsers":"[]","mapAccounts":"[]"}}]"
]
]
},

(base) ➜ kubeflow-eks git:(master) ✗ aws sts get-caller-identity
{
"Account": "674300753731",
"UserId": "AROAIXSWYIIDLDMHO5GPO:amarch",
"Arn": "arn:aws:sts::674300753731:assumed-role/aws-vbumodelscoring-management-team/amarch"
}

(base) ➜ kubeflow-eks git:(master) ✗ aws eks update-kubeconfig --name KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3 --region eu-west-1 --role-arn arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team --profile vbumodelscoring-admin
Updated context arn:aws:eks:eu-west-1:674300753731:cluster/KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3 in /Users/amarch/.kube/config

(base) ➜ kubeflow-eks git:(master) ✗ aws-iam-authenticator token -i KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1alpha1","spec":{},"status":{"expirationTimestamp":"2020-03-08T15:13:27Z","token":"k8s-aws-v1.blahblahblah}}

Verification of the token works, yet I cannot log in to EKS:
(base) ➜ kubeflow-eks git:(master) ✗ kubectl get nodes
error: You must be logged in to the server (Unauthorized)
(base) ➜ kubeflow-eks git:(master) ✗ kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: You must be logged in to the server (Unauthorized)
(base) ➜ kubeflow-eks git:(master) ✗ kubectl cluster-info dump
error: You must be logged in to the server (Unauthorized)

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get cluster
NAME REGION
KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3 eu-west-1

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get iamidentitymapping --cluster KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3
Error: getting auth ConfigMap: Unauthorized

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get fargateprofile --cluster KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3
NAME SELECTOR_NAMESPACE SELECTOR_LABELS POD_EXECUTION_ROLE_ARN SUBNETS
KubeFlowClusterfargateprofileD-e2c227b8dbf1453db48021da16e9ebb4 default arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterfargateprofileDefau-1JAXROWG84BPR subnet-cdd02aaa,subnet-97f731de,subnet-a33137fb

@SomayaB SomayaB removed the needs-triage This issue or PR still needs to be triaged. label Apr 14, 2020
@farshadniayeshpour

I had the same issue. I deleted the cluster and redeployed and I could log into the cluster with kubectl.

@dr3s
Author

dr3s commented Apr 16, 2020

If you have an example of a CDK stack that works with assumed roles from SAML as described in the flow here, I would be very grateful:
#6982 (comment)

I have tried a lot of variations and haven't found a solution other than assuming the role the stack creates for the cluster.

@farshadniayeshpour

@dr3s I can email you the script

@eladb
Contributor

eladb commented Apr 18, 2020

@FarshadNiayesh Would be great if you can share some details for future generations...

@farshadniayeshpour

farshadniayeshpour commented Apr 22, 2020

@dr3s @eladb So this is the code I am using:

    # Assumes the usual CDK v1 Python imports, e.g.:
    #   import os
    #   from aws_cdk import aws_ec2 as ec2, aws_eks as eks, aws_iam as iam

    def eks_iam_roles(self):
        cluster_admin_role = iam.Role(self, f"cluster-admin-role-{self.ENVIRONMENT}",
                                      role_name=f"KubernetesAdmin-{self.ENVIRONMENT}",
                                      assumed_by=iam.AccountRootPrincipal())

        admin_policy_statement = iam.PolicyStatement(resources=[cluster_admin_role.role_arn],
                                                     actions=["sts:AssumeRole"],
                                                     effect=iam.Effect.ALLOW)

        assume_EKS_admin_role = iam.ManagedPolicy(self, f"assume-eks-admin-role-{self.ENVIRONMENT}",
                                                  managed_policy_name=f"assume-KubernetesAdmin-role-{self.ENVIRONMENT}")

        assume_EKS_admin_role.add_statements(admin_policy_statement)

        eks_cluster_role = iam.Role(self, f"eks-role-{self.ENVIRONMENT}",
                                    assumed_by=iam.ServicePrincipal("eks.amazonaws.com"),
                                    managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name("AmazonEKSServicePolicy"),
                                                      iam.ManagedPolicy.from_aws_managed_policy_name("AmazonEKSClusterPolicy")])

        eks_master_role = iam.Role(self, f"eks-cluster-admin-{self.ENVIRONMENT}",
                                   assumed_by=iam.AccountRootPrincipal())

        return cluster_admin_role, eks_cluster_role, eks_master_role

    def eks_cluster(self, cluster_name, eks_master_role, eks_cluster_role, vpc=None, subnets=None, security_group=None, default_capacity=0, default_capacity_instance="r5.large"):
        if vpc:
            eks_cluster = eks.Cluster(self, f"{os.environ['APP_NAME']}-cluster-{self.ENVIRONMENT}",
                                      default_capacity=default_capacity,
                                      # default_capacity_instance=ec2.InstanceType(default_capacity_instance),
                                      kubectl_enabled=True,
                                      cluster_name=f"{cluster_name}-{self.ENVIRONMENT}",
                                      masters_role=eks_master_role,
                                      role=eks_cluster_role,
                                      # security_group=eks_security_group,
                                      vpc=vpc,
                                      output_cluster_name=True,
                                      output_masters_role_arn=True,
                                      vpc_subnets=[ec2.SubnetSelection(subnets=subnets)])
        else:
            eks_cluster = eks.Cluster(self, f"{os.environ['APP_NAME']}-cluster-{self.ENVIRONMENT}",
                                      default_capacity=default_capacity,
                                      default_capacity_instance=ec2.InstanceType(default_capacity_instance),
                                      default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
                                      masters_role=eks_master_role,
                                      output_cluster_name=True,
                                      output_config_command=True,
                                      output_masters_role_arn=True,
                                      role=eks_cluster_role,
                                      # security_group=eks_security_group,
                                      # if you want to create public load balancers, this must include public subnets:
                                      # vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE)]
                                      )

        return eks_cluster

    # Usage (e.g. in the stack constructor):
    cluster_admin_role, eks_cluster_role, eks_master_role = self.eks_iam_roles()

    # Creates the Kubernetes cluster
    eks_cluster = self.eks_cluster(cluster_name="rapid-prototyping-tool-cluster",
                                   eks_master_role=eks_master_role,
                                   eks_cluster_role=eks_cluster_role,
                                   # vpc=rpt_vpc,
                                   # subnets=private_subnets,
                                   # security_group=eks_sg
                                   )

    # Add a managed nodegroup to this Amazon EKS cluster.
    # This method will create a new managed nodegroup and add it to the capacity.
    eks_cluster.add_nodegroup(
        id='managed-nodegroup',
        desired_size=int(os.environ["APP_DESIRED_CAPACITY"]),
        disk_size=int(os.environ["APP_DISK_SIZE"]),
        instance_type=ec2.InstanceType(os.environ["APP_INSTANCE_TYPE"]),
        max_size=int(os.environ["APP_MAX_CAPACITY"]),
        min_size=int(os.environ["APP_MIN_CAPACITY"]),
        nodegroup_name=f'eks-{os.environ["APP_NAME"]}-nodegroup',
        remote_access=eks.NodegroupRemoteAccess(ssh_key_name=f'rpt-production-key-{self.ENVIRONMENT}',
                                                # source_security_groups=[eks_sg]
                                                ),
        subnets=ec2.SubnetSelection(subnets=eks_cluster.vpc.private_subnets)
    )

    aws_auth = eks.AwsAuth(self, 'awsAuthId', cluster=eks_cluster)
    aws_auth.add_masters_role(cluster_admin_role, username=f"k8s-cluster-admin-user-{self.ENVIRONMENT}")

Is this something you were looking for?

After the stack is deployed, I just use the aws eks update-kubeconfig command with the --role-arn option set to the proper role.

@dr3s
Author

dr3s commented Apr 22, 2020

Thanks @FarshadNiayesh. I don't know how yours is different from what I wrote above, except for the role you are using.

My example is specifically about using an assumed role via SAML that already exists; I'm loading it in the CDK via its ARN.

You seem to be creating a role in the CDK for the cluster. This should be similar to the role that's created by default and assigned to the cluster nodes. I don't have any issue assuming that role and managing the cluster, so I wouldn't expect to have an issue with your stack.

I'll give it a try but I don't think it addresses my root issue.

@dr3s
Author

dr3s commented Apr 24, 2020

Got it narrowed down. This works:

const clusterAdmin = new iam.Role(this, `eks-cluster-admin-${id}`, {
   assumedBy: new iam.AccountRootPrincipal(),
});

const cluster = new eks.Cluster(this, "KubeFlowCluster", {
  defaultCapacity: 3,
  defaultCapacityInstance: new ec2.InstanceType("t3.large"),
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

This doesn't work:


const clusterAdmin = iam.Role.fromRoleArn(
  this,
  `adminRole-${id}`,
  "arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team"
);

const cluster = new eks.Cluster(this, "KubeFlowCluster", {
  defaultCapacity: 3,
  defaultCapacityInstance: new ec2.InstanceType("t3.large"),
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

I think it has to do with the role being SAML, though I don't know why the Trusted Entities of the role would make a difference. I'll update the title of the issue to be more specific, but I'm at a loss. It's possible that this has more to do with EKS than the CDK.

@dr3s dr3s changed the title Can't log into fresh EKS cluster with mastersRole Can't log into fresh EKS cluster with SAML mastersRole Apr 24, 2020
@dr3s
Author

dr3s commented Jun 2, 2020

Based upon experimentation, I have found it works if I do two things:

  • Create a role rather than use the SAML role directly
  • Set the aws-auth mapping before declaring the node group
const clusterAdmin = new iam.Role(this, `eks-cluster-admin-${id}`, {
  assumedBy: new iam.AccountRootPrincipal(),
});

const cluster = new eks.Cluster(this, "FeastCluster", {
  defaultCapacity: 0,
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

cluster.awsAuth.addMastersRole(clusterAdmin);

cluster.addNodegroup("NGDefault", {
  instanceType: new ec2.InstanceType("t3.large"),
  diskSize: 100,
  minSize: 3,
  maxSize: 6,
});
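
For completeness, a hedged TypeScript sketch of how an SSO/SAML user could then reach the cluster through this new admin role. The SAML role ARN, construct IDs, and output name are placeholders, it assumes the usual iam and cdk core imports, and granting the policy only takes effect if the imported role is mutable from this account:

// Import the pre-existing SAML role by ARN (placeholder ARN).
const samlRole = iam.Role.fromRoleArn(this, 'SamlAdminRole',
  'arn:aws:iam::111122223333:role/ExampleSamlAdminRole');

// Allow principals who assumed the SAML role to also assume the cluster admin role.
clusterAdmin.grantAssumeRole(samlRole);

// Surface the admin role ARN; after deployment, SAML users would run
//   aws eks update-kubeconfig --name <cluster-name> --role-arn <this ARN>
new cdk.CfnOutput(this, 'ClusterAdminRoleArn', { value: clusterAdmin.roleArn });

Whether a CDK-managed policy can actually be attached to a corporate SAML/SSO role depends on the account setup, which may be why assuming the CDK-created role directly is what ends up working here.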

@eladb eladb added the effort/small Small work item – less than a day of effort label Aug 4, 2020
@iliapolo iliapolo changed the title Can't log into fresh EKS cluster with SAML mastersRole [aws-eks] Can't log into fresh EKS cluster with SAML mastersRole Aug 16, 2020
@eladb eladb removed their assignment Feb 25, 2021
@iliapolo iliapolo removed their assignment Jun 27, 2021
@rix0rrr rix0rrr assigned otaviomacedo and unassigned otaviomacedo Nov 24, 2021
@otaviomacedo
Contributor

@dr3s this seems like an EKS problem. Can you provide us with the EKS cluster ARN, so that the EKS team can investigate this further?

@otaviomacedo otaviomacedo added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Dec 16, 2021
@github-actions

This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.

@github-actions github-actions bot added closing-soon This issue will automatically close in 4 days unless further comments are made. closed-for-staleness This issue was automatically closed because it hadn't received any attention in a while. and removed closing-soon This issue will automatically close in 4 days unless further comments are made. labels Dec 16, 2021
@boscowitch

boscowitch commented Apr 19, 2024

This is still a bug, and I think it is in the CDK: simply setting up the cluster with Terraform, using the access key tokens etc. from my SAML SSO user's assumed role, just works, and I have console UI access to the cluster's resources without any additional role switching/assuming (by the way, Terraform took me a week less work up till now, and the CDK version is still not resolved or equivalent to it :( ; the CDK also seems unable to tag existing subnets, but that's another story).

Using a newly created role (with the assume policy etc. from above) works:

const clusterAdmin = new Role(this, `eks-cluster-admin-${id}`, {
  assumedBy: new iam.AccountRootPrincipal(),
});
// ...

However, that's more of a hacky workaround for SSO logins, since switching roles in the console UI and CLI is a bit of a nuisance, as is handling the long ARNs.

Especially since it's clearly possible with Terraform and its EKS module (or a manual setup of EKS).

Why can't the CDK simply assume the SSO role correctly, or just use the tokens I provide directly? That would make using CDK/EKS with SSO much simpler.

Especially since our SSO role was already created for the purpose of EKS admin:
arn:aws:iam::XXXXXXXXXXXX:role/aws-reserved/sso.amazonaws.com/eu-central-1/AWSReservedSSO_EKSAdmin_XXXXXXXXXXXXXXXX

PS: Companies seem to employ SSO for AWS accounts, and centralized management of roles/subnets etc., more and more. That makes this even more important, since dev/deployment time increases immensely with this lacking support.
