
Error when switching Ingress from v1beta1 to v1 #1734

Closed
mroloux opened this issue Sep 29, 2021 · 5 comments · Fixed by #1758
Labels: kind/bug (Some behavior is incorrect or out of spec), resolution/fixed (This issue was fixed)

mroloux commented Sep 29, 2021

Hello!

  • Vote on this issue by adding a 👍 reaction
  • To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already)

Issue details

I received an email from Google pointing out that we're using some deprecated APIs that won't be supported in Kubernetes 1.22. So I'm trying to migrate our Ingress resources from networking.k8s.io/v1beta1 to networking.k8s.io/v1.

While applying the new config for the first time, I get an error. Applying it a second time works.

Steps to reproduce

  1. create an Ingress such as:
import * as k8s from "@pulumi/kubernetes";
import { GlobalAddress } from "@pulumi/gcp/compute"; // assuming the static IP is a gcp.compute.GlobalAddress

export function createIngress(serviceName: string, ip: GlobalAddress, managedCertificate: k8s.apiextensions.CustomResource, nodePort: k8s.core.v1.Service, clusterProvider: k8s.Provider) {
    // GKE Ingress using the (deprecated) networking.k8s.io/v1beta1 API.
    const ingress = new k8s.networking.v1beta1.Ingress(serviceName + '-ingress', {
        apiVersion: "networking.k8s.io/v1beta1",
        metadata: {
            annotations: {
                "kubernetes.io/ingress.global-static-ip-name": ip.name,
                "networking.gke.io/managed-certificates": managedCertificate.metadata.name,
                "kubernetes.io/ingress.allow-http": "false"
            }
        },
        spec: {
            // v1beta1 default backend: a flat serviceName/servicePort pair.
            backend: {
                serviceName: nodePort.metadata.name,
                servicePort: 80
            }
        }
    }, {
        provider: clusterProvider,
        protect: true
    });
    return ingress;
}
  2. change the Ingress to not use the beta API:
// Same imports as above (k8s and GlobalAddress).
export function createIngress(serviceName: string, ip: GlobalAddress, managedCertificate: k8s.apiextensions.CustomResource, nodePort: k8s.core.v1.Service, clusterProvider: k8s.Provider) {
    // Same Ingress, now created through the networking.k8s.io/v1 API.
    return new k8s.networking.v1.Ingress(serviceName + '-ingress', {
        metadata: {
            annotations: {
                "kubernetes.io/ingress.global-static-ip-name": ip.name,
                "networking.gke.io/managed-certificates": managedCertificate.metadata.name,
                "kubernetes.io/ingress.allow-http": "false"
            }
        },
        spec: {
            // v1 replaces spec.backend with spec.defaultBackend.service.{name, port.number}.
            defaultBackend: {
                service: {
                    name: nodePort.metadata.name,
                    port: {
                        number: 80
                    }
                }
            }
        }
    }, {
        provider: clusterProvider,
        protect: true
    });
}
  3. run pulumi up

Expected: pulumi up completes successfully
Actual:

error: 1 error occurred:
    	* the Kubernetes API server reported that "default/seatsio-core-v2-staging-na-ingress-e3a28qp7" failed to fully initialize or become live: Ingress.extensions "seatsio-core-v2-staging-na-ingress-e3a28qp7" is invalid: spec: Invalid value: []networking.IngressRule(nil): either `backend` or `rules` must be specified 

Running pulumi up a second time, with the same code, works.
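
In case a trimmed-down repro is useful, the two steps above boil down to roughly the following standalone snippets (the resource and service names are placeholders, and I've left out the provider/protect options and the GKE annotations):

import * as k8s from "@pulumi/kubernetes";

// Step 1: deploy the v1beta1 Ingress first ("example-service" is a placeholder).
new k8s.networking.v1beta1.Ingress("repro-ingress", {
    spec: {
        backend: {
            serviceName: "example-service",
            servicePort: 80
        }
    }
});

// Step 2: replace the resource above with its v1 equivalent and run pulumi up again;
// the error appears on that first run with the new API version.
// new k8s.networking.v1.Ingress("repro-ingress", {
//     spec: {
//         defaultBackend: {
//             service: { name: "example-service", port: { number: 80 } }
//         }
//     }
// });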

Am I doing something wrong?
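
(For completeness, since the validation error says either `backend` or `rules` must be specified: the rules-based shape of a v1 Ingress would look roughly like this - the path, pathType, and names below are purely illustrative, not from our actual config.)

import * as k8s from "@pulumi/kubernetes";

// Illustrative only: a v1 Ingress that satisfies the "rules" branch of the
// validation instead of a default backend. All names and paths are placeholders.
new k8s.networking.v1.Ingress("example-rules-ingress", {
    spec: {
        rules: [{
            http: {
                paths: [{
                    path: "/",
                    pathType: "Prefix",
                    backend: {
                        service: {
                            name: "example-node-port-service",
                            port: { number: 80 }
                        }
                    }
                }]
            }
        }]
    }
});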

mroloux added the kind/bug label on Sep 29, 2021
mroloux changed the title from "Error when switching Ingress from networking.k8s.io/v1beta1 to networking.k8s.io/v1" to "Error when switching Ingress from v1beta1 to v1" on Sep 29, 2021

mroloux commented Sep 29, 2021

I just found this issue: #1668 - I assume it's related?

lblackstone (Member) commented

> I just found this issue: #1668 - I assume it's related?

Yeah, that seems possible. Are you blocked by this issue, or are things working aside from that initial error?


mroloux commented Oct 4, 2021

Blocking: yes and no. I only tried upgrading our staging server. Felt wrong to trigger an error on production (who knows what might break).

But sure, technically I could ignore the error while upgrading production.

lblackstone (Member) commented

Understood. I've got a potential fix in mind, but will need to do some testing to make sure it doesn't have unintended consequences. I'm hopeful that we can get this fixed in the next couple weeks if you can hold off until then.


mroloux commented Oct 4, 2021

Sounds good 👍
