I tried to deploy the ECK operator on Kubernetes v1.19 with the following code:
import * as k8s from '@pulumi/kubernetes';
const eckOperator = new k8s.yaml.ConfigFile('eck-operator', {
file: 'https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml',
});
And got the following error for pulumi up:
Type Name Plan Info
+ pulumi:pulumi:Stack dataplane_infra-test create
+ ├─ kubernetes:yaml:ConfigFile eck-operator create
+ │ ├─ kubernetes:core/v1:Namespace elastic-system create
+ │ ├─ kubernetes:core/v1:Service elastic-system/elastic-webhook-server create
+ │ ├─ kubernetes:core/v1:ServiceAccount elastic-system/elastic-operator create
+ │ ├─ kubernetes:core/v1:Secret elastic-system/elastic-webhook-server-cert create
+ │ ├─ kubernetes:core/v1:ConfigMap elastic-system/elastic-operator create
+ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole elastic-operator-view create
+ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole elastic-operator-edit create
+ │ └─ kubernetes:admissionregistration.k8s.io/v1beta1:ValidatingWebhookConfiguration elastic-webhook.k8s.elastic.co 1 error
Diagnostics:
kubernetes:admissionregistration.k8s.io/v1beta1:ValidatingWebhookConfiguration (elastic-webhook.k8s.elastic.co):
error: apiVersion "admissionregistration.k8s.io/v1beta1/ValidatingWebhookConfiguration" was removed in Kubernetes 1.19. Use "admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration" instead.
See https://git.k8s.io/kubernetes/CHANGELOG/CHANGELOG-1.19.md#deprecation-1 for more information.
This looks at first like an issue in the ECK operator YAML, but kubectl apply -f https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml works just fine.
So there appears to be an inconsistency between using pulumi_kubernetes and using kubectl directly.
As noted in the k8s issue kubernetes/kubernetes#82021 (comment), removal of admissionregistration.k8s.io/v1beta1 was planned for 1.19 but was postponed to 1.22.
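As a possible workaround (not mentioned in this thread), @pulumi/kubernetes lets you pass transformations to ConfigFile that mutate each parsed manifest object before Pulumi registers it. The sketch below, assuming the hypothetical helper name fixWebhookApiVersion, rewrites the deprecated apiVersion. Note that a v1 ValidatingWebhookConfiguration also requires fields such as sideEffects and admissionReviewVersions, so rewriting the apiVersion alone may not be sufficient for this particular manifest; it only illustrates the mechanism.

```typescript
// A plain Kubernetes object shape, enough for the transformation to inspect.
interface K8sObject {
    apiVersion: string;
    kind: string;
    [key: string]: unknown;
}

// Hypothetical transformation: bump the removed v1beta1 webhook apiVersion to v1.
// Transformations in @pulumi/kubernetes mutate the object in place.
function fixWebhookApiVersion(obj: K8sObject): void {
    if (
        obj.apiVersion === "admissionregistration.k8s.io/v1beta1" &&
        obj.kind === "ValidatingWebhookConfiguration"
    ) {
        obj.apiVersion = "admissionregistration.k8s.io/v1";
    }
}

// In the Pulumi program it would be wired up like this:
//
// const eckOperator = new k8s.yaml.ConfigFile('eck-operator', {
//     file: 'https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml',
//     transformations: [fixWebhookApiVersion],
// });
```

The transformation is written as a standalone function so it can be tested without a cluster; other resources in the manifest pass through it unchanged.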
maorkh4 changed the title to "admissionregistration.k8s.io/v1beta1 k8s API error when using pulumi and k8s 1.19" on Nov 23, 2020
Can you add a way to disable this type of validation altogether? It's kind of crazy that Pulumi would completely block applying manifests due to deprecations.
I'll change the error to a warning, and the cluster should return an error if the API is missing. The intent of this feature was to improve the error message for cases like this, but I didn't consider the case where upstream later changed the removal version.
Edit: The eck-operator deploys successfully with the changes from #1475, and #1474 will address the erroneous removal message.
Affected product version(s)
Pulumi v2.14.0
@pulumi/kubernetes: 2.7.2
kubernetes: v1.19.3