Transition from gcr.io to a modern artifact repository #15199
Comments
Having studied the release script, I see that pushing to both registries substantially increases the duration and resource usage of the release pipeline. The advantage is unclear to me.
It would be good to consult on what K8s is doing about this. @BenTheElder
They've already migrated. https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/
registry.k8s.io is a multi-cloud hybrid system for funding reasons (that's a whole complicated topic ...), but we've also used the opportunity to base Kubernetes's future image hosting on Artifact Registry, and we hope to adopt some of the AR features at some point, like immutable tags. registry.k8s.io basically sits in front of AR and redirects some content download traffic to other hosts. The source code is not fully reusable at the moment (shipping reliably ASAP >>> flexible configuration), but the approach is hopefully well enough documented and relatively simple. I'm not sure what overall is most appropriate for etcd, other than that I would recommend GCR => AR. It's mostly a drop-in upgrade.
I know that technically etcd isn't a Kubernetes SIG (right?), but it is CNCF, so maybe it should just use the Kubernetes release pipeline rather than creating a whole new one. I'd much rather we redefine the Kubernetes release pipeline as the CNCF release pipeline than require every CNCF project to stand up their own. There is a pipeline for etcd already: https://github.com/kubernetes/k8s.io/tree/main/k8s.gcr.io/images/k8s-staging-etcd The process is described here (along with the background of why etcd is there - TLDR: because it is bundled with k8s). This then becomes a shared problem (aka not etcd's problem), though of course anyone would be welcome to work on it. With artifacts.k8s.io, our dependency on gcr.io is pretty light anyway, and if the etcd project wants to maintain their own read-only mirror (e.g. if you have some money burning a hole in your pocket), then it's relatively easy to stand up an S3 / GCS / whatever bucket to do that.
@justinsb I agree that it's inefficient for each project to build their own pipeline, however I don't think it's as simple as just taking the K8s pipeline. The etcd image released through that pipeline is totally different from what etcd users would expect: it includes additional old etcd binaries and wrapper scripts for the purpose of running etcd in K8s. It would be great if CNCF gave us ready release tooling and maintained it for us, however the reality is that we mostly depend on contributions, and the etcd community is not large enough to support it on our own. I have escalated the problem of etcd release pipelines multiple times to both CNCF representatives and Kubernetes release people, but no luck. I'm stuck building etcd on my own laptop.
GHCR + GitHub Actions might be worth exploring as a potentially no-cost, automated, low-maintenance option. I think some SIG subprojects in Kubernetes have done so, but I don't have first-hand experience yet. I'm not sure Kubernetes is in a position to offer to host the entire CNCF (considering our existing budget overruns ...), but for etcd in particular there is probably an argument to be made; we'd need to bring that to SIG K8s Infra and SIG Release. Otherwise, if Kubernetes is not actively hosting the infrastructure for you, I wouldn't recommend replicating all of it, especially if you're already understaffed. The approaches used are not without benefits, but they are also not free.
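For context on what the GHCR + GitHub Actions option might look like, here is a minimal workflow sketch. This is an illustrative assumption, not an existing etcd workflow; the trigger, tag scheme, and build command are placeholders:

```yaml
# Hypothetical sketch: build and push a release image to GHCR.
# All names below are illustrative; adapt to the real release script.
name: publish-image
on:
  push:
    tags: ["v*"]
permissions:
  contents: read
  packages: write   # required for pushing to ghcr.io with GITHUB_TOKEN
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - run: |
          docker build -t ghcr.io/${{ github.repository }}:${{ github.ref_name }} .
          docker push ghcr.io/${{ github.repository }}:${{ github.ref_name }}
```

The appeal is that authentication uses the repository-scoped `GITHUB_TOKEN`, so no long-lived registry credentials need to be stored as secrets.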
GitHub Container Registry isn't configured for IPv6 either.
Tell me more @serathius and I might be able to make that monkey paw finger curl 🙃
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.
@justinsb @serathius @ahrtr Is this something that can be revisited these days?
etcd has already become a Kubernetes SIG, so how do other SIGs maintain their images? Can we just follow a similar approach? We need someone to drive this effort.
Let's chat separately and see if I can help with this.
What would you like to be added?
The Google Container Registry is deprecated. Transitioning within the Google ecosystem to its successor, Artifact Registry, is described at https://cloud.google.com/artifact-registry/docs/transition/transition-from-gcr.
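Per that transition guide, Artifact Registry's gcr.io domain support maps `gcr.io/PROJECT/IMAGE` to `LOCATION-docker.pkg.dev/PROJECT/gcr.io/IMAGE`, so image references can be rewritten mechanically. The helper below is a sketch; the function name and the `us` default location are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical helper: rewrite a gcr.io image reference to its Artifact
# Registry equivalent, assuming the repository layout created by AR's
# gcr.io domain support (LOCATION-docker.pkg.dev/PROJECT/gcr.io/IMAGE).
gcr_to_ar() {
  ref="$1"                # e.g. gcr.io/etcd-development/etcd:v3.6.0
  location="${2:-us}"     # AR location prefix; "us" is an assumed default
  rest="${ref#gcr.io/}"   # PROJECT/IMAGE:TAG
  project="${rest%%/*}"   # PROJECT
  image="${rest#*/}"      # IMAGE:TAG
  echo "${location}-docker.pkg.dev/${project}/gcr.io/${image}"
}

gcr_to_ar "gcr.io/etcd-development/etcd:v3.6.0"
```

Copying the actual image blobs over could then be done with a registry-to-registry tool such as `crane cp` from go-containerregistry, rather than rebuilding.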
Alternatively, only use Quay.
Why is this needed?
A pressing problem this would solve is that Artifact Registry is reachable over IPv6, whereas the Container Registry isn't.