
[Feature] Set default authentication mode to Config Map for Outposts EKS cluster #7552

Closed
awsbdau opened this issue Feb 14, 2024 · 2 comments · Fixed by #7699
Labels
kind/feature New feature or request priority/backlog Not staffed at the moment. Help wanted.

Comments


awsbdau commented Feb 14, 2024

When using eksctl v0.171.0 (and earlier, v0.169.0) to create an Outposts EKS Local Cluster, the CloudFormation template generated for the cluster includes the following snippet, which sets the authenticationMode:

    "ControlPlane": {
      "Type": "AWS::EKS::Cluster",
      "Properties": {
        "AccessConfig": {
          "AuthenticationMode": "API_AND_CONFIG_MAP",
          "BootstrapClusterCreatorAdminPermissions": true
        },

However, Outposts EKS Local Clusters support only authenticationMode=CONFIG_MAP for AccessConfig:

2024-02-07 12:36:57 [✖]  AWS::EKS::Cluster/ControlPlane: CREATE_FAILED – "Resource handler returned message: \"Local Amazon EKS cluster only supports bootstrapClusterCreatorAdminPermissions=true and authenticationMode=CONFIG_MAP for AccessConfig. (Service: Eks, Status Code: 400, Request ID: 783761cd-56a0-4c3d-a43d-2fdf7fae07ae)\" (RequestToken: 7a20ebbc-6ae8-04ec-21de-36da1179bf2c, HandlerErrorCode: InvalidRequest)"
2024-02-07 12:36:57 [!]  1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console

A workaround is to set the authenticationMode directly in the Outposts EKS Local Cluster configuration by adding an accessConfig section. For example:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lzrs-op-local-cp
  region: us-west-2
  version: "1.29"

accessConfig:
  authenticationMode: CONFIG_MAP

vpc:
  id: "vpc-11111111111111111"
  clusterEndpoints:
    privateAccess: true
  subnets:
    private:
      lzrs-outposts-private-32:
        id: "subnet-22222222222222222"
      lzrs-outposts-private-64:
        id: "subnet-33333333333333333"
      lzrs-outposts-private-80:
        id: "subnet-44444444444444444"

outpost:
  controlPlaneOutpostARN: arn:aws:outposts:us-west-2:7777777777777:outpost/op-6666666666666666
  controlPlaneInstanceType: r5.large
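
With accessConfig set explicitly as above, the generated template requests authenticationMode CONFIG_MAP, and cluster creation no longer fails on the AccessConfig validation.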

The environment in which the issue was encountered is:

OS: Linux opr.blah.io 6.2.0-1017-aws #17~22.04.1-Ubuntu SMP Fri Nov 17 21:07:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
AWS CLI: aws-cli/2.15.14 Python/3.11.6 Linux/6.2.0-1017-aws exe/x86_64.ubuntu.22 prompt/off
eksctl: v0.171.0
kubectl: Client Version: v1.29.1

Possible solutions: adapt eksctl to detect that an Outposts EKS Local Cluster is being created and always set authenticationMode to CONFIG_MAP, or update the documentation to describe the required accessConfig section when defining an Outposts EKS Local Cluster configuration. A sketch of the first option is shown below.
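
For illustration, a minimal, self-contained Go sketch of the first option follows. The types here are simplified stand-ins, not eksctl's actual API, and the real change would live in eksctl's config defaulting; the idea is simply to respect an explicit user choice, force CONFIG_MAP when an Outposts control plane is configured, and keep API_AND_CONFIG_MAP as the default otherwise.

package main

import "fmt"

// Simplified stand-ins for the relevant ClusterConfig fields (illustration only).
type AccessConfig struct {
    AuthenticationMode string
}

type Outpost struct {
    ControlPlaneOutpostARN string
}

type ClusterConfig struct {
    AccessConfig *AccessConfig
    Outpost      *Outpost
}

// setDefaultAuthenticationMode leaves an explicit user setting untouched;
// otherwise it picks CONFIG_MAP for Outposts local clusters (the only mode
// the EKS API accepts there, per the error above) and API_AND_CONFIG_MAP
// for everything else.
func setDefaultAuthenticationMode(cfg *ClusterConfig) {
    if cfg.AccessConfig == nil {
        cfg.AccessConfig = &AccessConfig{}
    }
    if cfg.AccessConfig.AuthenticationMode != "" {
        return // user set a mode explicitly
    }
    if cfg.Outpost != nil && cfg.Outpost.ControlPlaneOutpostARN != "" {
        cfg.AccessConfig.AuthenticationMode = "CONFIG_MAP"
        return
    }
    cfg.AccessConfig.AuthenticationMode = "API_AND_CONFIG_MAP"
}

func main() {
    cfg := &ClusterConfig{Outpost: &Outpost{ControlPlaneOutpostARN: "arn:aws:outposts:us-west-2:7777777777777:outpost/op-6666666666666666"}}
    setDefaultAuthenticationMode(cfg)
    fmt.Println(cfg.AccessConfig.AuthenticationMode) // prints CONFIG_MAP
}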

Sanitized debug output from an attempted cluster creation follows:

admin@opr:~/lzrs-eks/opr$ eksctl create cluster -f ./lzrs-op-eks-local-cp.yaml -v 4
2024-02-14 19:31:13 [▶]  Setting credentials expiry window to 30 minutes
2024-02-14 19:31:13 [▶]  role ARN for the current session is "arn:aws:iam::454545454545:user/eks-admin"
2024-02-14 19:31:13 [ℹ]  eksctl version 0.171.0
2024-02-14 19:31:13 [ℹ]  using region us-west-2
2024-02-14 19:31:13 [✔]  using existing VPC (vpc-0000000000000) and subnets (private:map[lzrs-outposts-private-32:{subnet-12345671234567123 us-west-2a 10.204.32.0/20 0 arn:aws:outposts:us-west-2:121212121212122:outpost/op-444222999eee000} lzrs-outposts-private-64:{subnet-12345671234567144 us-west-2a 10.204.64.0/20 0 arn:aws:outposts:us-west-2:121212121212122:outpost/op-444222999eee000} lzrs-outposts-private-80:{subnet-12345671234567166 us-west-2a 10.204.80.0/24 0 arn:aws:outposts:us-west-2:121212121212122:outpost/op-444222999eee000}] public:map[])
2024-02-14 19:31:13 [!]  custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
2024-02-14 19:31:13 [ℹ]  using Kubernetes version 1.29
2024-02-14 19:31:13 [ℹ]  creating EKS cluster "lzrs-op-local-cp" in "us-west-2" region with
2024-02-14 19:31:13 [▶]  cfg.json = \
{
    "kind": "ClusterConfig",
    "apiVersion": "eksctl.io/v1alpha5",
    "metadata": {
        "name": "lzrs-op-local-cp",
        "region": "us-west-2",
        "version": "1.29"
    },
    "iam": {
        "withOIDC": false,
        "vpcResourceControllerPolicy": true
    },
    "accessConfig": {
        "authenticationMode": "API_AND_CONFIG_MAP"
    },
    "vpc": {
        "id": "vpc-0000000000000",
        "cidr": "10.204.0.0/16",
        "subnets": {
            "private": {
                "lzrs-outposts-private-32": {
                    "id": "subnet-12345671234567123",
                    "az": "us-west-2a",
                    "cidr": "10.204.32.0/20"
                },
                "lzrs-outposts-private-64": {
                    "id": "subnet-12345671234567144",
                    "az": "us-west-2a",
                    "cidr": "10.204.64.0/20"
                },
                "lzrs-outposts-private-80": {
                    "id": "subnet-12345671234567166",
                    "az": "us-west-2a",
                    "cidr": "10.204.80.0/24"
                }
            }
        },
        "manageSharedNodeSecurityGroupRules": true,
        "nat": {
            "gateway": "Single"
        },
        "clusterEndpoints": {
            "privateAccess": true,
            "publicAccess": false
        }
    },
    "privateCluster": {
        "enabled": false,
        "skipEndpointCreation": false
    },
    "availabilityZones": [
        "us-west-2a"
    ],
    "outpost": {
        "controlPlaneOutpostARN": "arn:aws:outposts:us-west-2:121212121212122:outpost/op-444222999eee000",
        "controlPlaneInstanceType": "r5.large"
    }
}

2024-02-14 19:31:13 [ℹ]  will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-02-14 19:31:13 [ℹ]  will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
2024-02-14 19:31:13 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=lzrs-op-local-cp'
2024-02-14 19:31:13 [ℹ]  Kubernetes API endpoint access will use provided values {publicAccess=false, privateAccess=true} for cluster "lzrs-op-local-cp" in "us-west-2"
2024-02-14 19:31:13 [ℹ]  CloudWatch logging will not be enabled for cluster "lzrs-op-local-cp" in "us-west-2"
2024-02-14 19:31:13 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=lzrs-op-local-cp'
2024-02-14 19:31:13 [ℹ]
2 sequential tasks: { create cluster control plane "lzrs-op-local-cp", wait for control plane to become ready
}
2024-02-14 19:31:13 [▶]  started task: create cluster control plane "lzrs-op-local-cp"
2024-02-14 19:31:13 [ℹ]  building cluster stack "eksctl-lzrs-op-local-cp-cluster"
2024-02-14 19:31:13 [▶]  CreateStackInput = &cloudformation.CreateStackInput{StackName:(*string)(0xc000e76a30), Capabilities:[]types.Capability{"CAPABILITY_IAM"}, ClientRequestToken:(*string)(nil), DisableRollback:(*bool)(0xc00087f680), EnableTerminationProtection:(*bool)(nil), NotificationARNs:[]string(nil), OnFailure:"", Parameters:[]types.Parameter(nil), ResourceTypes:[]string(nil), RetainExceptOnCreate:(*bool)(nil), RoleARN:(*string)(nil), RollbackConfiguration:(*types.RollbackConfiguration)(nil), StackPolicyBody:(*string)(nil), StackPolicyURL:(*string)(nil), Tags:[]types.Tag{types.Tag{Key:(*string)(0xc0005ecad0), Value:(*string)(0xc0005ecae0), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc0005ecaf0), Value:(*string)(0xc0005ecb00), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc0005ecb20), Value:(*string)(0xc0005ecb30), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000c42a60), Value:(*string)(0xc000c42a70), noSmithyDocumentSerde:document.NoSerde{}}}, TemplateBody:(*string)(0xc000c42a80), TemplateURL:(*string)(nil), TimeoutInMinutes:(*int32)(nil), noSmithyDocumentSerde:document.NoSerde{}}
2024-02-14 19:31:14 [ℹ]  deploying stack "eksctl-lzrs-op-local-cp-cluster"
2024-02-14 19:31:44 [ℹ]  waiting for CloudFormation stack "eksctl-lzrs-op-local-cp-cluster"
2024-02-14 19:31:44 [✖]  unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-lzrs-op-local-cp-cluster"
2024-02-14 19:31:44 [✖]  unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-lzrs-op-local-cp-cluster"
2024-02-14 19:31:44 [ℹ]  fetching stack events in attempt to troubleshoot the root cause of the failure
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroup/ClusterSharedNodeSecurityGroup: DELETE_COMPLETE
2024-02-14 19:31:44 [!]  AWS::IAM::Role/ServiceRole: DELETE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroup/ControlPlaneSecurityGroup: DELETE_COMPLETE
2024-02-14 19:31:44 [!]  AWS::EC2::SecurityGroup/ClusterSharedNodeSecurityGroup: DELETE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::IAM::Policy/PolicyELBPermissions: DELETE_COMPLETE
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroupIngress/IngressInterNodeGroupSG: DELETE_COMPLETE
2024-02-14 19:31:44 [▶]  AWS::IAM::Policy/PolicyCloudWatchMetrics: DELETE_COMPLETE
2024-02-14 19:31:44 [!]  AWS::EC2::SecurityGroup/ControlPlaneSecurityGroup: DELETE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::EKS::Cluster/ControlPlane: DELETE_COMPLETE
2024-02-14 19:31:44 [!]  AWS::IAM::Policy/PolicyCloudWatchMetrics: DELETE_IN_PROGRESS
2024-02-14 19:31:44 [!]  AWS::IAM::Policy/PolicyELBPermissions: DELETE_IN_PROGRESS
2024-02-14 19:31:44 [!]  AWS::EC2::SecurityGroupIngress/IngressInterNodeGroupSG: DELETE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::CloudFormation::Stack/eksctl-lzrs-op-local-cp-cluster: ROLLBACK_IN_PROGRESS – "The following resource(s) failed to create: [PolicyELBPermissions, ControlPlane, PolicyCloudWatchMetrics]. Rollback requested by user."
2024-02-14 19:31:44 [✖]  AWS::IAM::Policy/PolicyELBPermissions: CREATE_FAILED – "Resource creation cancelled"
2024-02-14 19:31:44 [✖]  AWS::IAM::Policy/PolicyCloudWatchMetrics: CREATE_FAILED – "Resource creation cancelled"
2024-02-14 19:31:44 [✖]  AWS::EKS::Cluster/ControlPlane: CREATE_FAILED – "Resource handler returned message: \"Local Amazon EKS cluster only supports bootstrapClusterCreatorAdminPermissions=true and authenticationMode=CONFIG_MAP for AccessConfig. (Service: Eks, Status Code: 400, Request ID: b534c044-7829-4f6a-a67f-68204595cd4b)\" (RequestToken: 53571e90-ff06-2419-7930-485b94fa1527, HandlerErrorCode: InvalidRequest)"
2024-02-14 19:31:44 [▶]  AWS::IAM::Policy/PolicyELBPermissions: CREATE_IN_PROGRESS – "Resource creation Initiated"
2024-02-14 19:31:44 [▶]  AWS::IAM::Policy/PolicyCloudWatchMetrics: CREATE_IN_PROGRESS – "Resource creation Initiated"
2024-02-14 19:31:44 [▶]  AWS::EKS::Cluster/ControlPlane: CREATE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::IAM::Policy/PolicyELBPermissions: CREATE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::IAM::Policy/PolicyCloudWatchMetrics: CREATE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::IAM::Role/ServiceRole: CREATE_COMPLETE
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroupIngress/IngressInterNodeGroupSG: CREATE_COMPLETE
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroupIngress/IngressInterNodeGroupSG: CREATE_IN_PROGRESS – "Resource creation Initiated"
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroupIngress/IngressInterNodeGroupSG: CREATE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroup/ClusterSharedNodeSecurityGroup: CREATE_COMPLETE
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroup/ControlPlaneSecurityGroup: CREATE_COMPLETE
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroup/ControlPlaneSecurityGroup: CREATE_IN_PROGRESS – "Resource creation Initiated"
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroup/ClusterSharedNodeSecurityGroup: CREATE_IN_PROGRESS – "Resource creation Initiated"
2024-02-14 19:31:44 [▶]  AWS::IAM::Role/ServiceRole: CREATE_IN_PROGRESS – "Resource creation Initiated"
2024-02-14 19:31:44 [▶]  AWS::IAM::Role/ServiceRole: CREATE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroup/ClusterSharedNodeSecurityGroup: CREATE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::EC2::SecurityGroup/ControlPlaneSecurityGroup: CREATE_IN_PROGRESS
2024-02-14 19:31:44 [▶]  AWS::CloudFormation::Stack/eksctl-lzrs-op-local-cp-cluster: CREATE_IN_PROGRESS – "User Initiated"
2024-02-14 19:31:44 [▶]  failed task: create cluster control plane "lzrs-op-local-cp" (will not run other sequential tasks)
2024-02-14 19:31:44 [!]  1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2024-02-14 19:31:44 [ℹ]  to cleanup resources, run 'eksctl delete cluster --region=us-west-2 --name=lzrs-op-local-cp'
2024-02-14 19:31:44 [✖]  ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "lzrs-op-local-cp"


Hello awsbdau 👋 Thank you for opening an issue in the eksctl project. The team will review the issue and aim to respond within 1-5 business days. Meanwhile, please read the Contribution and Code of Conduct guidelines here. You can find out more information about eksctl on our website.


This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the stale label Mar 16, 2024
@yuxiang-zhang yuxiang-zhang added kind/feature New feature or request priority/backlog Not staffed at the moment. Help wanted. and removed kind/bug stale labels Mar 18, 2024
@yuxiang-zhang yuxiang-zhang changed the title eksctl v0.171.0 sets 'authenticationMode: API_AND_CONFIG_MAP' which is incompatible with Outposts EKS Local Clusters [Feature] Set default authentication mode to Config Map for Outposts EKS cluster Mar 19, 2024