
admissionregistration.k8s.io/v1beta1 k8s API error when using pulumi and k8s 1.19 #1388

Closed
maorkh4 opened this issue Nov 23, 2020 · 2 comments · Fixed by #1474
maorkh4 commented Nov 23, 2020

Problem description

I tried to deploy the ECK operator on k8s v1.19 using the following code:

import * as k8s from '@pulumi/kubernetes';

const eckOperator = new k8s.yaml.ConfigFile('eck-operator', {
    file: 'https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml',
});

And got the following error for pulumi up:

     Type                                                                               Name                                        Plan       Info
 +   pulumi:pulumi:Stack                                                                dataplane_infra-test                        create
 +   ├─ kubernetes:yaml:ConfigFile                                                      eck-operator                                create
 +   │  ├─ kubernetes:core/v1:Namespace                                                 elastic-system                              create
 +   │  ├─ kubernetes:core/v1:Service                                                   elastic-system/elastic-webhook-server       create
 +   │  ├─ kubernetes:core/v1:ServiceAccount                                            elastic-system/elastic-operator             create
 +   │  ├─ kubernetes:core/v1:Secret                                                    elastic-system/elastic-webhook-server-cert  create
 +   │  ├─ kubernetes:core/v1:ConfigMap                                                 elastic-system/elastic-operator             create
 +   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole                          elastic-operator-view                       create
 +   │  └─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole                          elastic-operator-edit                       create
     └─ kubernetes:admissionregistration.k8s.io/v1beta1:ValidatingWebhookConfiguration  elastic-webhook.k8s.elastic.co                         1 error

Diagnostics:
  kubernetes:admissionregistration.k8s.io/v1beta1:ValidatingWebhookConfiguration (elastic-webhook.k8s.elastic.co):
    error: apiVersion "admissionregistration.k8s.io/v1beta1/ValidatingWebhookConfiguration" was removed in Kubernetes 1.19. Use "admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration" instead.
    See https://git.k8s.io/kubernetes/CHANGELOG/CHANGELOG-1.19.md#deprecation-1 for more information.

This seems like an issue in the ECK operator YAML, but `kubectl apply -f https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml` works just fine, so there appears to be an inconsistency between pulumi_kubernetes and kubectl.
As noted in kubernetes/kubernetes#82021 (comment), the removal of admissionregistration.k8s.io/v1beta1 was originally planned for 1.19 but was postponed to 1.22.
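A possible interim workaround (my own sketch, not something proposed in this thread) is the `transformations` option accepted by `k8s.yaml.ConfigFile`: a plain function that mutates each parsed manifest before Pulumi processes it. The `fixWebhookApiVersion` helper below is hypothetical; note also that the v1 webhook schema requires additional fields (e.g. `sideEffects`, `admissionReviewVersions`), so rewriting the apiVersion alone may not be sufficient for every manifest.

```typescript
// Sketch of a workaround: rewrite the removed v1beta1 webhook apiVersion
// before Pulumi validates the manifest. The transformation itself is plain
// TypeScript, so it can be unit-tested without a cluster.

interface K8sManifest {
    apiVersion: string;
    kind: string;
    [key: string]: unknown;
}

// Hypothetical helper: bump admissionregistration v1beta1 webhooks to v1.
function fixWebhookApiVersion(obj: K8sManifest): void {
    const webhookKinds = [
        'ValidatingWebhookConfiguration',
        'MutatingWebhookConfiguration',
    ];
    if (
        obj.apiVersion === 'admissionregistration.k8s.io/v1beta1' &&
        webhookKinds.includes(obj.kind)
    ) {
        obj.apiVersion = 'admissionregistration.k8s.io/v1';
    }
}

// Usage with ConfigFile (assumes @pulumi/kubernetes is installed):
//
// const eckOperator = new k8s.yaml.ConfigFile('eck-operator', {
//     file: 'https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml',
//     transformations: [fixWebhookApiVersion],
// });
```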

Affected product version(s)

Pulumi v2.14.0
@pulumi/kubernetes: 2.7.2
kubernetes: v1.19.3

@loopingrage

Can you add a way to disable this type of validation altogether? It's kind of crazy that Pulumi would completely block applying manifests due to deprecations.


lblackstone commented Feb 20, 2021

> Can you add a way to disable this type of validation altogether? It's kind of crazy that Pulumi would completely block applying manifests due to deprecations.

I'll change the error to a warning, and the cluster should return an error if the API is missing. The intent of this feature was to improve the error message for cases like this, but I didn't consider the case where upstream later changed the removal version.

Edit:

The eck-operator deploys successfully with the changes from #1475, and #1474 will address the erroneous removal message.
