
Transition from gcr.io to a modern artifact repository #15199

Open
sanmai-NL opened this issue Jan 30, 2023 · 14 comments

@sanmai-NL

What would you like to be added?

The Google Container Registry is deprecated. Transitioning within the Google ecosystem, to their Artifact Registry, is described on https://cloud.google.com/artifact-registry/docs/transition/transition-from-gcr.

Alternatively, only use Quay.

Why is this needed?

A pressing problem this would solve is that the Artifact Registry is reachable over IPv6, whereas the Container Registry isn't.
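The IPv6 gap is easy to verify from any dual-stack host. A minimal Python sketch (the hostnames are just illustrative endpoints, and results depend on the network you run it from) checks whether a registry hostname publishes any IPv6 (AAAA) addresses:

```python
import socket

def has_ipv6(host: str) -> bool:
    """Return True if `host` resolves to at least one IPv6 address."""
    try:
        infos = socket.getaddrinfo(host, 443, socket.AF_INET6, socket.SOCK_STREAM)
        return len(infos) > 0
    except OSError:
        # No AAAA records, or resolution failed entirely.
        return False

if __name__ == "__main__":
    for host in ("gcr.io", "pkg.dev", "quay.io"):
        print(f"{host}: IPv6 {'yes' if has_ipv6(host) else 'no'}")
```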

@sanmai-NL
Author

Having studied the release script, I see that pushing to both registries substantially increases the duration and resource usage of the release pipeline. The advantage of doing so is unclear to me.

@serathius
Member

It would be good to consult what K8s is doing about this. @BenTheElder

@BenTheElder

registry.k8s.io is a multi-cloud hybrid system for funding reasons (that's a whole complicated topic ...), but we've also used the opportunity to base Kubernetes's future image hosting on Artifact Registry. We hope to adopt some of the AR features at some point, such as immutable tags.

registry.k8s.io basically sits in front of AR and redirects some content download traffic to other hosts. The source code is not fully reusable at the moment (shipping reliably ASAP >>> flexible configuration), but the approach is hopefully well enough documented and relatively simple.
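As a rough illustration of that approach (this is not the actual registry.k8s.io code; the hosts, paths, and regions below are all made up), the routing decision boils down to something like:

```python
# Simplified sketch of a registry redirector: serve manifests from a primary
# backend (e.g. Artifact Registry) and redirect blob downloads to a host
# chosen per client region. All names here are hypothetical.

PRIMARY = "https://us-docker.pkg.dev"      # hypothetical AR backend
BLOB_MIRRORS = {
    "us": "https://blobs-us.example.com",  # hypothetical mirror hosts
    "eu": "https://blobs-eu.example.com",
}

def route(path: str, client_region: str) -> str:
    """Return the upstream URL a request for `path` should be sent to."""
    # Blob fetches are content-addressed and immutable, so they are safe
    # to redirect to cheaper mirrors; everything else (manifests, tag
    # listings) goes to the primary registry.
    if "/blobs/sha256:" in path:
        mirror = BLOB_MIRRORS.get(client_region, PRIMARY)
        return mirror + path
    return PRIMARY + path
```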

I'm not sure what overall is most appropriate for etcd, other than that I would recommend GCR => AR. It's mostly a drop-in upgrade.
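For reference, the GCR => AR move is largely a matter of rewriting image references: per Google's transition guide, a `gcr.io/PROJECT/IMAGE` name becomes a `LOCATION-docker.pkg.dev/PROJECT/REPO/IMAGE` name. A hypothetical helper (the repository name and location defaults are assumptions for illustration, not etcd's actual setup) shows the shape of the rename:

```python
def gcr_to_ar(ref: str, location: str = "us", repo: str = "images") -> str:
    """Rewrite a gcr.io image reference into an Artifact Registry one.

    Example shape: gcr.io/PROJECT/IMAGE:TAG
             ->    LOCATION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
    """
    prefix = "gcr.io/"
    if not ref.startswith(prefix):
        raise ValueError(f"not a gcr.io reference: {ref}")
    project, _, image = ref[len(prefix):].partition("/")
    return f"{location}-docker.pkg.dev/{project}/{repo}/{image}"
```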

@justinsb
Contributor

justinsb commented Feb 4, 2023

I know that technically etcd isn't a kubernetes sig (right?), but it is CNCF, so maybe it should just use the kubernetes release pipeline, rather than creating a whole new one. I'd much rather we redefine the kubernetes release pipeline as the CNCF release pipeline, than require every CNCF project to stand up their own.

There is a pipeline for etcd already:

https://github.com/kubernetes/k8s.io/tree/main/k8s.gcr.io/images/k8s-staging-etcd

The process is described here (along with background on why etcd is there; TL;DR: because it is bundled with k8s).

This then becomes a shared problem (aka not etcd's problem), though of course anyone would be welcome to work on it. With artifacts.k8s.io, our dependency on gcr.io is pretty light anyway, and if the etcd project wants to maintain its own read-only mirror (e.g. if you have some money burning a hole in your pocket), then it's relatively easy to stand up an S3 / GCS / whatever bucket to do that.

@serathius
Member

@justinsb I agree that it's inefficient for each project to build its own pipeline; however, I don't think it's as simple as just taking the K8s pipeline. The etcd image released by the K8s pipeline is totally different from what etcd users would expect: it includes additional old etcd binaries and wrapper scripts for the purpose of running etcd in K8s.

It would be great if CNCF gave us ready release tooling and maintained it for us; however, the reality is that we mostly depend on contributions, and the etcd community is not large enough to support it on our own. I have escalated the problem of etcd release pipelines multiple times to both CNCF representatives and Kubernetes release people, but no luck. I'm stuck building etcd on my own laptop.

@BenTheElder

GHCR + github actions might be worth exploring as a potentially no-cost, automated, low-maintenance option. I think some SIG subprojects in Kubernetes have done so, but I don't have first hand experience yet.

I'm not sure Kubernetes is in a position to be offering to host the entire CNCF (considering our existing budget overruns...) ... but for etcd in particular there is probably an argument to be made, we'd need to bring that to SIG K8s Infra and SIG Release.

Otherwise if Kubernetes is not actively hosting the infrastructure for you, I wouldn't recommend replicating all of it, especially if you're already understaffed. The approaches used are not without benefits but also not free.

https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry
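For anyone who wants to experiment with that option, a minimal GitHub Actions workflow publishing to GHCR looks roughly like the following (a hypothetical sketch, not etcd's actual release setup; the trigger, image name, and tags are illustrative):

```yaml
# Hypothetical minimal workflow pushing a release image to GHCR on tag push.
name: release-image
on:
  push:
    tags: ["v*"]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # required to push to ghcr.io
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }}
```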

@sanmai-NL
Author

> GHCR + github actions might be worth exploring as a potentially no-cost, automated, low-maintenance option. I think some SIG subprojects in Kubernetes have done so, but I don't have first hand experience yet.
>
> I'm not sure Kubernetes is in a position to be offering to host the entire CNCF (considering our existing budget overruns...) ... but for etcd in particular there is probably an argument to be made, we'd need to bring that to SIG K8s Infra and SIG Release.
>
> Otherwise if Kubernetes is not actively hosting the infrastructure for you, I wouldn't recommend replicating all of it, especially if you're already understaffed. The approaches used are not without benefits but also not free.
>
> https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry

GitHub Container Registry isn't configured for IPv6 either.

@jeefy

jeefy commented Feb 6, 2023

> It would be great if CNCF gave us ready release tooling and maintained it for us, however reality is that we mostly depend on contributions and etcd community is not large enough to support it on our own.

Tell me more @serathius and I might be able to make that monkey paw finger curl 🙃

@stale

stale bot commented May 21, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label May 21, 2023
@ahrtr ahrtr added stage/tracked and removed stale labels May 21, 2023
@hakman

hakman commented May 18, 2024

@justinsb @serathius @ahrtr Is this something that can be revisited these days?

@ahrtr
Member

ahrtr commented May 18, 2024

> @justinsb @serathius @ahrtr Is this something that can be revisited these days?

etcd has already become a Kubernetes SIG. How do other SIGs maintain their images? Can we just follow a similar approach? We need someone to drive this effort.

@hakman

hakman commented May 18, 2024

Let's chat separately and see if I can help with this.

@ahrtr
Member

ahrtr commented May 18, 2024

/assign @hakman

Thanks. Please feel free to let me or @jmhbnz know if you need any assistance from the etcd side.
