
(aws-eks): kubectl layer is not compatible with k8s v1.22.0 #19843

Closed
akefirad opened this issue Apr 10, 2022 · 27 comments
Labels
@aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service effort/large Large work item – several weeks of effort feature-request A feature should be added or improved. p1

Comments

@akefirad

akefirad commented Apr 10, 2022

Describe the bug

Running an empty update on an empty EKS cluster fails while updating the resource EksClusterAwsAuthmanifest12345678 (Custom::AWSCDK-EKS-KubernetesResource).

Expected Behavior

The update should succeed.

Current Behavior

It fails with the following error:

Received response status [FAILED] from custom resource. Message returned: Error: b'configmap/aws-auth configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n' Logs: /aws/lambda/InfraMainCluster-awscdkawseksKubec-Handler886CB40B-rDGV9O3CyH7n at invokeUserFunction (/var/task/framework.js:2:6) at processTicksAndRejections (internal/process/task_queues.js:97:5) at async onEvent (/var/task/framework.js:1:302) at async Runtime.handler (/var/task/cfn-response.js:1:1474) (RequestId: acd049fc-771c-4410-8e09-8ec4bec67813)

Reproduction Steps

This is what I did:

  1. Deploy an empty cluster:
export class EksClusterStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const clusterAdminRole = new iam.Role(this, "ClusterAdminRole", {
      assumedBy: new iam.AccountRootPrincipal(),
    });

    const vpc = ec2.Vpc.fromLookup(this, "MainVpc", {
      vpcId: "vpc-1234567890123456789",
    });

    const cluster = new eks.Cluster(this, "EksCluster", {
      vpc: vpc,
      vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_NAT }],
      clusterName: `${id}`,
      mastersRole: clusterAdminRole,
      defaultCapacity: 0,
      version: eks.KubernetesVersion.V1_22,
    });

    cluster.addFargateProfile("DefaultProfile", {
      selectors: [{ namespace: "default" }],
    });
  }
}
  2. Add a new Fargate profile:
    cluster.addFargateProfile("IstioProfile", {
      selectors: [{ namespace: "istio-system" }],
    });
  3. Deploy the stack and wait for the failure.

Possible Solution

No response

Additional Information/Context

I checked the version of kubectl in the Lambda handler and it's 1.20.0, which AFAIK is not compatible with cluster version 1.22.0. I'm not entirely sure how the Lambda is created; I assumed it would match the kubectl version to whatever version the cluster has, but that is indeed not the case (#15736).

CDK CLI Version

2.20.0 (build 738ef49)

Framework Version

No response

Node.js Version

v16.13.0

OS

Darwin 21.3.0

Language

Typescript

Language Version

3.9.10

Other information

Similar to #15072?

@akefirad akefirad added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Apr 10, 2022
@github-actions github-actions bot added the @aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service label Apr 10, 2022
@dtitenko-dev

dtitenko-dev commented Apr 10, 2022

@akefirad Yesterday I had the same issue. As a temporary solution, you can create your own Lambda layer version and pass it as a parameter to the Cluster construct. Here is my solution in Python; it's just a combination of AwsCliLayer and KubectlLayer.

My code builds layer.zip on every synth, but you can build it once when you need it and keep layer.zip in your repository.

assets/kubectl-layer/build.sh

#!/bin/bash
set -euo pipefail

cd $(dirname $0)

echo ">> Building AWS Lambda layer inside a docker image..."

TAG='kubectl-lambda-layer'

docker build -t ${TAG} .

echo ">> Extrating layer.zip from the build container..."
CONTAINER=$(docker run -d ${TAG} false)
docker cp ${CONTAINER}:/layer.zip layer.zip

echo ">> Stopping container..."
docker rm -f ${CONTAINER}
echo ">> layer.zip is ready"

assets/kubectl-layer/Dockerfile

# base lambda image
FROM public.ecr.aws/sam/build-python3.7

#
# versions
#

# KUBECTL_VERSION should not be changed at the moment, see https://github.com/aws/aws-cdk/issues/15736
# Version 1.21.0 is not compatible with version 1.20 (and lower) of the server.
ARG KUBECTL_VERSION=1.22.0
ARG HELM_VERSION=3.8.1

USER root
RUN mkdir -p /opt
WORKDIR /tmp

#
# tools
#

RUN yum update -y \
    && yum install -y zip unzip wget tar gzip

#
# aws cli
#

COPY requirements.txt ./
RUN python -m pip install -r requirements.txt -t /opt/awscli

# organize for self-contained usage
RUN mv /opt/awscli/bin/aws /opt/awscli

# cleanup
RUN rm -rf \
    /opt/awscli/pip* \
    /opt/awscli/setuptools* \
    /opt/awscli/awscli/examples


#
# Test that the CLI works
#

RUN yum install -y groff
RUN /opt/awscli/aws help

#
# kubectl
#

RUN mkdir -p /opt/kubectl
RUN cd /opt/kubectl && curl -LO "https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
RUN chmod +x /opt/kubectl/kubectl

#
# helm
#

RUN mkdir -p /tmp/helm && wget -qO- https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz | tar -xvz -C /tmp/helm
RUN mkdir -p /opt/helm && cp /tmp/helm/linux-amd64/helm /opt/helm/helm

#
# create the bundle
#

RUN cd /opt \
    && zip --symlinks -r ../layer.zip * \
    && echo "/layer.zip is ready" \
    && ls -alh /layer.zip;

WORKDIR /
ENTRYPOINT [ "/bin/bash" ]

assets/kubectl-layer/requirements.txt

awscli==1.22.92

kubectl_layer.py

import builtins
import typing
import subprocess

import aws_cdk as cdk

from aws_cdk import (
    aws_lambda as lambda_
)

from constructs import Construct

class KubectlLayer(lambda_.LayerVersion):

    def __init__(self, scope: Construct, construct_id: builtins.str, *,
        compatible_architectures: typing.Optional[typing.Sequence[lambda_.Architecture]] = None,
        compatible_runtimes: typing.Optional[typing.Sequence[lambda_.Runtime]] = None,
        layer_version_name: typing.Optional[builtins.str] = None,
        license: typing.Optional[builtins.str] = None,
        removal_policy: typing.Optional[cdk.RemovalPolicy] = None
    ) -> None:

        subprocess.check_call(["<path to assets/kubectl-layer/build.sh>"])  # build layer.zip on every run

        super().__init__(scope, construct_id,
            code=lambda_.AssetCode(
                path="<path to created assets/kubectl-layer/layer.zip>",
                asset_hash=cdk.FileSystem.fingerprint(
                    file_or_directory="<path to assets/kubectl-layer/ dir>",
                    exclude=["*.zip"]
                )
            ),
            description="/opt/awscli/aws, /opt/kubectl/kubectl and /opt/helm/helm",
            compatible_architectures=compatible_architectures,
            compatible_runtimes=compatible_runtimes,
            layer_version_name=layer_version_name,
            license=license,
            removal_policy=removal_policy
        )
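
For completeness, a minimal usage sketch (assuming the KubectlLayer class above is importable from kubectl_layer.py and the placeholder paths are filled in); the custom layer is passed to the Cluster construct through its kubectl_layer property:

from aws_cdk import aws_eks as eks

from kubectl_layer import KubectlLayer

# inside your Stack's __init__
cluster = eks.Cluster(self, "EksCluster",
    version=eks.KubernetesVersion.V1_22,
    kubectl_layer=KubectlLayer(self, "KubectlLayer"),
    default_capacity=0,
)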

@akefirad akefirad changed the title aws-eks: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true (aws-eks): error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true Apr 10, 2022
@akefirad akefirad changed the title (aws-eks): error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true (aws-eks): kubectl layer is not compatible with k8s v1.22.0 Apr 10, 2022
@peterwoodworth peterwoodworth added p1 effort/small Small work item – less than a day of effort and removed needs-triage This issue or PR still needs to be triaged. labels Apr 21, 2022
@robertd
Contributor

robertd commented Apr 26, 2022

@peterwoodworth Check out the commit message on #20000. After talking this over with Rico, we've decided that it's a much greater effort, as it would break backward compatibility with <1.21. LMKWYT.

@peterwoodworth
Contributor

I thought this would get auto-closed once #20000 was merged 😅

No reason to keep this issue open with #20000 merged, I think. Thanks for the ping.

@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

@peterwoodworth peterwoodworth added feature-request A feature should be added or improved. effort/large Large work item – several weeks of effort and removed bug This issue is a bug. effort/small Small work item – less than a day of effort labels May 6, 2022
@peterwoodworth
Contributor

Reopening as a feature request

@robertd
Contributor

robertd commented May 12, 2022

Linking aws/containers-roadmap#1595... EKS v1.23 k8s support is coming in August.

@chlunde

chlunde commented May 25, 2022

FYI, a workaround is to set prune to false. This of course has some side effects, but you can mitigate them by ensuring there's only one Kubernetes object per manifest.
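
For reference, a minimal TypeScript sketch of that workaround using the Cluster construct's prune property (which defaults to true):

const cluster = new eks.Cluster(this, "EksCluster", {
  version: eks.KubernetesVersion.V1_22,
  // Apply manifests without kubectl's --prune flag; this avoids the failing
  // RESTMappings lookup, but objects removed from a manifest are no longer
  // deleted from the cluster automatically.
  prune: false,
});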

@adriantaut
Contributor

hitting the same issue 😢

@pinlast

pinlast commented Jun 6, 2022

same for me

steved added a commit to dominodatalab/cdk-cf-eks that referenced this issue Aug 12, 2022
* remove prune setting for eks cluster

see aws/aws-cdk#19843

* empty commit to trigger CI
@natevick

same here. CDK version 2.37.0

@Obirah

Obirah commented Aug 31, 2022

A solution is announced for mid-September; see this issue.

@PavanMudigondaTR

I have been struggling with this error for the last two days! Just today I noticed this issue. I would appreciate it if AWS could provide a solution. In the meantime I will destroy my 1.23 stack and deploy with 1.21. I hope that works!

@leadelngalame1611

Hitting the same issue. Can anyone from AWS please tell us when this issue will be resolved? Since we upgraded EKS to version 1.22.0 we have been facing this issue. The workaround by @chlunde works well, but not in all cases. Currently we cannot create new node groups, because these require an update of the aws-auth resource, which keeps failing with this error message:

Received response status [FAILED] from custom resource. Message returned: Error: b'configmap/aws-auth configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n' Logs: /aws/lambda/InfraMainCluster-awscdkawseksKubec-Handler886CB40B-rDGV9O3CyH7n at invokeUserFunction (/var/task/framework.js:2:6) at processTicksAndRejections (internal/process/task_queues.js:97:5) at async onEvent (/var/task/framework.js:1:302) at async Runtime.handler (/var/task/cfn-response.js:1:1474) (RequestId: acd049fc-771c-4410-8e09-8ec4bec67813)

Currently we are blocked and can't proceed with our deployment.

@nilroy

nilroy commented Oct 25, 2022

Hitting the same issue. Can anyone from AWS please tell us when this issue will be resolved? Since we upgraded EKS to version 1.22.0 we have been facing this issue. The workaround by @chlunde works well, but not in all cases. Currently we cannot create new node groups, because these require an update of the aws-auth resource, which keeps failing with this error message:

Received response status [FAILED] from custom resource. Message returned: Error: b'configmap/aws-auth configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n' Logs: /aws/lambda/InfraMainCluster-awscdkawseksKubec-Handler886CB40B-rDGV9O3CyH7n at invokeUserFunction (/var/task/framework.js:2:6) at processTicksAndRejections (internal/process/task_queues.js:97:5) at async onEvent (/var/task/framework.js:1:302) at async Runtime.handler (/var/task/cfn-response.js:1:1474) (RequestId: acd049fc-771c-4410-8e09-8ec4bec67813)

Currently we are blocked and can't proceed with our deployment.

I have used the instructions posted by @Obirah and that works so far. See here

@cgarvis
Contributor

cgarvis commented Oct 27, 2022

This week's release should include a way to use an updated kubectl layer.

@leadelngalame1611

@cgarvis Thank you for the update. We are waiting impatiently for the Release.

@rhyswilliamsza

Hello

I see 1.23 support has been merged! 🎉 Thanks for the effort there.

Re: KubectlV23Layer: is this still an experimental feature? We'd like to use a V1_23 kubectl layer with the Java edition, but this doesn't seem possible at this stage?

@neptune19821220

neptune19821220 commented Nov 1, 2022

Hello,

Thank you for the new release to support EKS 1.23.

But when I deployed the stack to create EKS 1.23, I got this warning:

You created a cluster with Kubernetes Version 1.23 without specifying the kubectlLayer property. 
This may cause failures as the kubectl version provided with aws-cdk-lib is 1.20, 
which is only guaranteed to be compatible with Kubernetes versions 1.19-1.21. 
Please provide a kubectlLayer from @aws-cdk/lambda-layer-kubectl-v23.

Then I tried to follow the documentation:

import { KubectlV23Layer } from 'aws-cdk-lib/lambda-layer-kubectl-v23';

const cluster = new eks.Cluster(this, 'hello-eks', {
  version: eks.KubernetesVersion.V1_23,
  kubectlLayer: new KubectlV23Layer(this, 'kubectl'),
});

But there seems to be no lambda-layer-kubectl-v23 package under aws-cdk-lib v2.50.0.
Is lambda-layer-kubectl-v23 available now?

@Obirah

Obirah commented Nov 1, 2022

Hello,

Thank you for the new release to support EKS 1.23.

But when I deployed the stack to create EKS 1.23, I got this warning:

You created a cluster with Kubernetes Version 1.23 without specifying the kubectlLayer property. 
This may cause failures as the kubectl version provided with aws-cdk-lib is 1.20, 
which is only guaranteed to be compatible with Kubernetes versions 1.19-1.21. 
Please provide a kubectlLayer from @aws-cdk/lambda-layer-kubectl-v23.

Then I tried to follow the documentation:

import { KubectlV23Layer } from 'aws-cdk-lib/lambda-layer-kubectl-v23';

const cluster = new eks.Cluster(this, 'hello-eks', {
  version: eks.KubernetesVersion.V1_23,
  kubectlLayer: new KubectlV23Layer(this, 'kubectl'),
});

But there seems to be no lambda-layer-kubectl-v23 package under aws-cdk-lib v2.50.0. Is lambda-layer-kubectl-v23 available now?

Hi, you need to add the package @aws-cdk/lambda-layer-kubectl-v23 to your (dev) dependencies and import the layer from that package.
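
In TypeScript that looks roughly like this (note the import comes from the separate @aws-cdk/lambda-layer-kubectl-v23 package rather than from aws-cdk-lib):

// npm install --save-dev @aws-cdk/lambda-layer-kubectl-v23
import { KubectlV23Layer } from '@aws-cdk/lambda-layer-kubectl-v23';
import * as eks from 'aws-cdk-lib/aws-eks';

const cluster = new eks.Cluster(this, 'hello-eks', {
  version: eks.KubernetesVersion.V1_23,
  kubectlLayer: new KubectlV23Layer(this, 'kubectl'),
});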

@neptune19821220

@Obirah
Thank you for your help.
It works now.

@transadm312

I am using aws-cdk-go and couldn't find lambda-layer-kubectl-v23 in the Go package dependencies.

@yo-ga
Contributor

yo-ga commented Nov 2, 2022

I also couldn't import lambda_layer_kubectl_v23 from the Python package (aws-cdk-lib==2.50.0).

@samhopwell

I also couldn't import lambda_layer_kubectl_v23 from the Python package (aws-cdk-lib==2.50.0).

There is a separate module you need to install, aws-cdk.lambda-layer-kubectl-v23; then you can import it with from aws_cdk import lambda_layer_kubectl_v23.
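
A minimal Python sketch along those lines (assuming the package and module names from the comment above):

# pip install aws-cdk.lambda-layer-kubectl-v23
from aws_cdk import aws_eks as eks
from aws_cdk import lambda_layer_kubectl_v23

# inside your Stack's __init__
cluster = eks.Cluster(self, "HelloEks",
    version=eks.KubernetesVersion.V1_23,
    kubectl_layer=lambda_layer_kubectl_v23.KubectlV23Layer(self, "Kubectl"),
)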

@cgarvis cgarvis closed this as completed Nov 6, 2022
@github-actions

github-actions bot commented Nov 6, 2022

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

@wtibbitts

@samhopwell Will there be a v2 of this coming soon, or would we need to just use the v1 or build our own?

@jaredhancock31

jaredhancock31 commented Dec 7, 2022

I am using aws-cdk-go and couldn't find lambda-layer-kubectl-v23 in the Go package dependencies.

Any docs/guidance on how to proceed using Golang? Can't find a proper module to import...

Update: the Go module is buried here for anyone else hunting: https://github.com/cdklabs/awscdk-kubectl-go/tree/kubectlv22/v2.0.3/kubectlv22

@andrewbulin

andrewbulin commented Feb 10, 2023

I am using aws-cdk-go and couldn't find lambda-layer-kubectl-v23 in the Go package dependencies.

Any docs/guidance on how to proceed using Golang? Can't find a proper module to import...

Update: the Go module is buried here for anyone else hunting: https://github.com/cdklabs/awscdk-kubectl-go/tree/kubectlv22/v2.0.3/kubectlv22

Thanks @jaredhancock31! This helped me a lot. ^_^

If anyone needs it, here is my example implementation in Go, tweaked from the original cdk init file go-cdk.go and using the suggestion above, to upgrade a cluster I was experimenting with.

complete code here: https://gist.github.com/andrewbulin/e23c313008372d4e5149899817bebe32

snippet here:

	cluster := awseks.NewCluster(
		stack,
		jsii.String("UpgradeMe"),
		&awseks.ClusterProps{
			Version:      awseks.KubernetesVersion_V1_22(),
			KubectlLayer: kubectlv22.NewKubectlV22Layer(stack, jsii.String("kubectl")),
			ClusterName:  jsii.String("upgrade-me"),
			ClusterLogging: &[]awseks.ClusterLoggingTypes{
				awseks.ClusterLoggingTypes_AUDIT,
			},
		},
	)
