
official image mirror #5573

Closed
FlorianLudwig opened this issue May 19, 2020 · 14 comments · Fixed by #5578
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@FlorianLudwig

FlorianLudwig commented May 19, 2020

It would be great to have an official mirror for the image. Right now quay.io is down and there is no official backup mirror to pull the image from.

/kind feature

@FlorianLudwig FlorianLudwig added the kind/feature Categorizes issue or PR as related to a new feature. label May 19, 2020
@zackijack
Contributor

zackijack commented May 19, 2020

@FlorianLudwig for now, you should probably use bitnami on docker hub: https://hub.docker.com/r/bitnami/nginx-ingress-controller/

kubectl set image deployment/nginx-ingress-controller \
  nginx-ingress-controller=docker.io/bitnami/nginx-ingress-controller:0.32.0 -n ingress-nginx

@FlorianLudwig
Author

@zackijack It seems it is not a drop-in replacement, so I would not recommend that.

@max-rocket-internet

quay.io has been unreliable for a while now.

Can we just switch to GCR? Or publish to both?

@Marusyk

Marusyk commented May 19, 2020

🔥🔥🔥🔥🔥🔥 It still returns 500 Internal Server Error 🔥🔥🔥🔥🔥🔥
@aledbf @ElvinEfendi

@aledbf
Member

aledbf commented May 19, 2020

quay.io has been unreliable for a while now.

Do you have more information about this?

Can we just switch to GCR? Or publish to both?

Switch? No. Both, yes. I am getting information about the procedure to publish and use gcr.io

@Marusyk

Marusyk commented May 19, 2020

k8s says something like:

Failed to pull image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/kubernetes-ingress-controller/nginx-ingress-controller/manifests/0.30.0: Get https://quay.io/v2/auth?scope=repository%3Akubernetes-ingress-controller%2Fnginx-ingress-controller%3Apull&service=quay.io: net/http: 500 InternalServerError (Client.Timeout exceeded while awaiting headers)
Failed to pull image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/kubernetes-ingress-controller/nginx-ingress-controller/manifests/0.30.0: Get https://quay.io/v2/auth?scope=repository%3Akubernetes-ingress-controller%2Fnginx-ingress-controller%3Apull&service=quay.io: net/http: request canceled (Client.Timeout exceeded while awaiting headers)


@timgestson

Do you have more information about this?

https://status.quay.io/ — quay's status page over the last 24 hours or so. There have been many outages in the past few years, many of them full and extended outages.

@golx

golx commented May 19, 2020

Incident on their status page.
This is quite severe, actually, as we were unlucky enough to upgrade our production cluster while quay.io was unavailable. We ended up with 40 minutes of downtime because our ingress controllers failed to start on fresh nodes and all of the live traffic was cut.

This is not an issue with ingress-nginx per se, but having a backup registry would be very welcome in cases like this.

@aledbf
Member

aledbf commented May 19, 2020

After #5578 the project will start pushing images to the gcr.io registry.

@aledbf
Member

aledbf commented May 26, 2020

The k8s infra setup is now done. I am setting up the automation to start pushing to gcr.io in the next release. To be clear, this will be as a backup of quay.io, at least for now. The release notes will include the URL to pull from gcr.io.

Staging copy of the last release
gcr.io/k8s-staging-ingress-nginx/nginx-ingress-controller:0.32.0
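For anyone editing manifests directly rather than using kubectl set image, a minimal sketch of pinning the controller container to the staging image. The deployment and container names below follow the common ingress-nginx manifests and may differ in your setup; this is a fragment under those assumptions, not an official recommendation.

```yaml
# Fragment of the controller Deployment spec. The container name is an
# assumption based on common ingress-nginx manifests; match it to yours.
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          image: gcr.io/k8s-staging-ingress-nginx/nginx-ingress-controller:0.32.0
          # IfNotPresent lets nodes that already have the image keep
          # starting pods even while the registry is unreachable.
          imagePullPolicy: IfNotPresent
```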

@alexellis

I'm here for the below, as we have an app that installs ingress-nginx via arkade and it was failing:

kubectl set image deployment/ingress-nginx-controller controller=gcr.io/k8s-staging-ingress-nginx/nginx-ingress-controller:0.32.0

@aledbf given the poor track record of recent outages, what arguments do you see against publishing to both repos and then setting gcr.io as the default? I don't have the full picture that you do.

@aledbf
Member

aledbf commented May 28, 2020

I don't have the full picture that you do.

The required changes in the project, plus the tasks needed outside the project (test-infra): kubernetes/k8s.io#882 and kubernetes/test-infra#17652.
My plan is to have everything needed to push to both repositories in place for the next release.

@ivkos

ivkos commented May 28, 2020

Staging copy of the last release
gcr.io/k8s-staging-ingress-nginx/nginx-ingress-controller:0.32.0

Thanks @aledbf, this was helpful since quay.io is down again

@dkapanidis

dkapanidis commented May 28, 2020

I was also affected by the quay.io downtime on the licensing server of kubernetic.

I had a previous version of nginx-ingress-controller (v0.24.1). I tried to just upgrade the image to 0.32.0, but it did not work due to the change of runAsUser from 33 to 101. (#4061 (comment))

Then I got the following issue:

Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:kubernetic:pro-nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope

Finally, I opted to deploy a fresh instance instead of simply patching the existing one, and updated the DNS entry to point to the new controller's LB service.
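The "cannot list ingresses" error above is an RBAC gap: the ServiceAccount lacks permissions on the networking.k8s.io API group that newer controller versions use. A hedged sketch of the missing rule follows; the ClusterRole name here is a placeholder, and in practice the rule should be merged into whatever ClusterRole your release already binds to the controller's ServiceAccount.

```yaml
# Hypothetical ClusterRole fragment granting the newer API group.
# Bind it (via a ClusterRoleBinding) to the controller's ServiceAccount,
# e.g. system:serviceaccount:kubernetic:pro-nginx-ingress from the error.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-networking  # placeholder; match your release
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
```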
