
‼️ NOTICE: aws-eks "error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1" #15072

Closed
mattchrist opened this issue Jun 10, 2021 · 10 comments · Fixed by #15314
Labels
@aws-cdk/aws-eks · bug · effort/small · management/tracking · p0

Comments

@mattchrist
Contributor

mattchrist commented Jun 10, 2021

Please add your +1 👍 to let us know you have encountered this


Status: IN-PROGRESS

Overview:

Version 1.106.0 and later of the aws-eks construct library throw an error when trying to update a KubernetesManifest object; this includes objects created via the cluster.addManifest method.
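
For reference, a minimal sketch of an affected cluster.addManifest call (illustrative only; any manifest deployed through the Custom::AWSCDK-EKS-KubernetesResource handler hits the same failure on update):

// Illustrative sketch: manifests added this way are deployed by the same
// kubectl-based custom resource handler and fail on update while pruning.
cluster.addManifest("example-config", {
  apiVersion: "v1",
  kind: "ConfigMap",
  metadata: { name: "example-config", namespace: "default" },
  data: { key: "value" },
});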

Complete Error Message:

11:22:46 AM | UPDATE_FAILED        | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'

Workaround:

Downgrade to version 1.105.0 or below
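
An alternative mitigation, not verified in this thread, is to disable pruning, since the error is raised while kubectl resolves the resource types it would prune. A minimal sketch at the cluster level, reusing the Cluster construct from @aws-cdk/aws-eks and assuming a stack in scope:

// Sketch: with prune disabled, kubectl no longer needs RESTMappings for its
// prune allow-list; the trade-off is that objects removed from the manifest
// are left behind and must be deleted manually.
const cluster = new Cluster(stack, "cluster", {
  version: KubernetesVersion.V1_16,
  prune: false,
});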


Original opening post

When updating a KubernetesManifest, the deploy fails with an error like:

11:22:46 AM | UPDATE_FAILED        | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'

This issue occurs with Kubernetes versions 1.16, 1.17, and 1.20.

Reproduction Steps

  1. Deploy a simple EKS stack with a manifest
import { Stack, App } from "@aws-cdk/core";
import {
  Cluster,
  KubernetesManifest,
  KubernetesVersion,
} from "@aws-cdk/aws-eks";

const app = new App();
const stack = new Stack(app, "repro-prune-invalid-resource", {
  env: {
    region: process.env.CDK_DEFAULT_REGION,
    account: process.env.CDK_DEFAULT_ACCOUNT,
  },
});

const cluster = new Cluster(stack, "cluster", {
  clusterName: "repro-prune-invalid-resource-test",
  version: KubernetesVersion.V1_16,
  prune: true,
});

const manifest = new KubernetesManifest(stack, `pdb`, {
  cluster,
  manifest: [
    {
      apiVersion: "policy/v1beta1",
      kind: "PodDisruptionBudget",
      metadata: {
        name: "test-pdb",
        namespace: "default",
      },
      spec: {
        maxUnavailable: 1,
        selector: {
          matchLabels: { app: "thing" },
        },
      },
    },
  ],
});

app.synth();

This deploys successfully.

  2. Make a small change to the manifest, such as changing maxUnavailable: 1 to maxUnavailable: 2, and deploy again

This results in the error above.

What did you expect to happen?

I expected the deployment to succeed and update the maxUnavailable field in the deployed manifest from 1 to 2.

What actually happened?

11:22:46 AM | UPDATE_FAILED        | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'

Logs: /aws/lambda/repro-prune-invalid-resource-awscd-Handler886CB40B-hFxU42VXJuOz

at invokeUserFunction (/var/task/framework.js:95:19)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async onEvent (/var/task/framework.js:19:27)
at async Runtime.handler (/var/task/cfn-response.js:48:13) (RequestId: 1be7dfcb-288d-4309-8b8c-cadafb97fd09)

Environment

  • CDK CLI Version : 1.108.0
  • Framework Version: 1.108.0
  • Node.js Version: v12.18.4
  • OS : Linux
  • Language (Version): Typescript 4.3.2

Other


This is a 🐛 Bug Report

@mattchrist mattchrist added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Jun 10, 2021
@github-actions github-actions bot added the @aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service label Jun 10, 2021
@cweidinger

This is also a problem for us, +1. It also happens with 1.20 on our real cluster.

@saggar

saggar commented Jun 23, 2021

+1. Same error on an EKS cluster running v1.18. It is currently blocking deployment of K8s manifest/YAML changes via CDK.

@otaviomacedo otaviomacedo added effort/small Small work item – less than a day of effort p1 and removed needs-triage This issue or PR still needs to be triaged. labels Jun 25, 2021
@otaviomacedo
Contributor

The recent bump in the kubectl version from 1.20.0 to 1.21.0 broke KubernetesManifest updates.

Marking this as a p1, given the other comments from people facing the same issue.

@otaviomacedo otaviomacedo removed their assignment Jun 25, 2021
@otaviomacedo otaviomacedo added p0 and removed p1 labels Jun 28, 2021
@otaviomacedo otaviomacedo changed the title (eks): can't update manifest, get error "error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1" ‼️ NOTICE: aws-eks "error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1" Jun 28, 2021
@otaviomacedo otaviomacedo added the management/tracking Issues that track a subject or multiple issues label Jun 28, 2021
@otaviomacedo otaviomacedo pinned this issue Jun 28, 2021
@mergify mergify bot closed this as completed in #15314 Jun 28, 2021
mergify bot pushed a commit that referenced this issue Jun 28, 2021
The recent [bump] in the kubectl version from 1.20.0 to 1.21.0 broke KubernetesManifest updates.

Fixes #15072.

[bump]: c7f9f97

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

@iliapolo
Contributor

We are preparing a patch release for this fix. Will update once available.

iliapolo pushed a commit that referenced this issue Jun 28, 2021
@NetaNir
Contributor

NetaNir commented Jun 28, 2021

Version 1.110.1 was released with the patch.

@DarkmatterVale

DarkmatterVale commented Jun 30, 2021

Out of curiosity, why isn't the kubectl handler version always configured to be the same version as the Kubernetes cluster?
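
The construct library does expose a kubectlLayer property for exactly this purpose: supplying a Lambda layer whose kubectl version matches the cluster, as later comments in this thread do. A minimal sketch for CDK v2, assuming a stack in scope and the version-specific layer package installed:

// Sketch (CDK v2): pin the handler's kubectl version to the cluster version.
import { Cluster, KubernetesVersion } from "aws-cdk-lib/aws-eks";
import { KubectlV23Layer } from "@aws-cdk/lambda-layer-kubectl-v23";

const cluster = new Cluster(stack, "cluster", {
  version: KubernetesVersion.V1_23,
  kubectlLayer: new KubectlV23Layer(stack, "kubectl"),
});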

@otaviomacedo otaviomacedo unpinned this issue Jul 1, 2021
@otaviomacedo otaviomacedo removed their assignment Jul 1, 2021
hollanddd pushed a commit to hollanddd/aws-cdk that referenced this issue Aug 26, 2021
@asgerjensen

Seeing error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n' on EKS version 1.23, using CDK 2.51.1

@robertd
Contributor

robertd commented Nov 26, 2022

@asgerjensen Can you please paste a code snippet showing how you’re creating your cluster? Did you provide a custom kubectl layer, or are you using the defaults?

@charlesakalugwu

charlesakalugwu commented Dec 15, 2022

@robertd I just ran into this problem as well.

Fargate EKS 1.23 built using CDK 2.53.0

Cluster looks as follows:

# Presumed imports; the snippet is taken from inside a Stack/Construct class,
# so prefix, version, vpc, masters_role, kms_key_data and tags are defined elsewhere.
import aws_cdk.lambda_layer_kubectl_v23 as kubectl_v23
from aws_cdk import aws_ec2 as ec2, aws_eks as eks

kubernetes_cluster = eks.Cluster(
    self,
    id=f"{prefix}-cluster",
    version=version,
    vpc=vpc,
    vpc_subnets=[
        ec2.SubnetSelection(
            subnet_group_name="private-subnet",
        ),
    ],
    cluster_logging=[
        eks.ClusterLoggingTypes.AUDIT,
    ],
    default_capacity=0,
    endpoint_access=eks.EndpointAccess.PUBLIC_AND_PRIVATE,
    kubectl_layer=kubectl_v23.KubectlV23Layer(self, id=f"{prefix}-kubectl"),
    masters_role=masters_role,
    output_masters_role_arn=False,
    place_cluster_handler_in_vpc=True,
    secrets_encryption_key=kms_key_data,
    output_cluster_name=False,
    output_config_command=False,
    tags=tags,
)

As you can see, I supplied the matching kubectl layer for k8s 1.23. Nevertheless, I keep seeing the error:

Received response status [FAILED] from custom resource. Message returned: Error: b'configmap/foo configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n'

I have upgraded CDK to 2.55.0 and EKS to 1.24, and I saw the error again.

@asgerjensen Did you make any progress?
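
One way to unblock deployments while this is investigated (a sketch only, not confirmed in this thread to resolve the extensions/v1beta1 case) is to opt individual manifests out of pruning via the KubernetesManifest prune option, shown here in TypeScript with an illustrative ConfigMap and an existing cluster in scope:

// Sketch: per-manifest opt-out of pruning; objects removed from the manifest
// must then be cleaned up manually.
new KubernetesManifest(stack, "foo-config", {
  cluster,
  prune: false,
  manifest: [
    {
      apiVersion: "v1",
      kind: "ConfigMap",
      metadata: { name: "foo", namespace: "default" },
      data: { key: "value" },
    },
  ],
});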
