[all] Consider switching to (mirroring to) GCR #1540
I am not aware of any official announcement stating that Kubernetes images should be maintained in GCR (please correct me if I missed something). We would be very happy to switch to GCR if there is a mechanism that could be easily integrated with CI; even a manual process sounds great. For production environments, I would recommend mirroring the images to your private image registry to work around the rate limit concern.
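A minimal sketch of such a mirroring step, assuming a hypothetical private registry `registry.example.com` (the source image name and tag below are examples, not confirmed coordinates; `DRY_RUN=1`, the default, only prints the docker commands instead of running them):

```shell
#!/bin/sh
# Sketch: mirror a Docker Hub image into a private registry so cluster
# nodes pull from the mirror instead of hitting Docker Hub rate limits.
# registry.example.com and the tag are hypothetical placeholders.
SRC="docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.25.5"
MIRROR="registry.example.com/mirror"       # your private registry
TARGET="${MIRROR}/${SRC#docker.io/}"       # same repo path, new registry host

DRY_RUN="${DRY_RUN:-1}"                    # default: only print the commands
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run docker pull "$SRC"
run docker tag  "$SRC" "$TARGET"
run docker push "$TARGET"
```

With the mirror in place, the pod spec's `image:` field would point at the mirrored path rather than the Docker Hub one.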
I am suffering from this as well and have to wait a long time, so moving away from Docker Hub to a repo without a pull limit would be great. @lingxiankong, would it be a big burden and effort for us to move to GCR (e.g. CI dependencies, code repo, etc.)?
As I said:
As far as I know, GCR is not free; maybe the Kubernetes community has a contract with Google, but I'm not sure.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Adding the comment here for doc reference.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is also a potential governance issue, btw: how is access to the Docker Hub repo managed?
I've submitted kubernetes/test-infra#28817 to run the builds.
@mdbooth does it mean that we can now pull the OCCM and cinder-csi-driver images from GCR?
Not immediately, but hopefully the next release will be available from registry.k8s.io.
Thanks! I will reopen the issue until the images are available. /reopen
@ialidzhikov: Reopened this issue. In response to this:
This is done! The first images are available. /close
@mdbooth: Closing this issue. In response to this:
…nager` and `cinder-csi-plugin`
With kubernetes/cloud-provider-openstack#1540, the `openstack-cloud-controller-manager` and `cinder-csi-plugin` images are pushed to GCR. Hence, we no longer need to maintain our copies.
@zetaab I now see that there are many versions available (v1.24.6, v1.25.5, v1.26.2) in GCR - ref https://github.com/kubernetes/k8s.io/blob/main/k8s.gcr.io/images/k8s-staging-provider-os/images.yaml
@ialidzhikov the official support policy in K8s covers the last 3 minor versions. Also, release-1.23 and release-1.22 are both missing CI pipelines; at least I am not volunteering to start building releases manually or backporting CI pipelines to releases that old.
Okay, thanks!
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
/sig cloud-provider
/area provider/openstack
What happened:
Currently cloud provider openstack related images are maintained in Docker Hub.
In some environments we face Docker Hub rate limits while pulling OCCM or the Cinder CSI driver:
Last year Docker Hub restricted the amount of image pulls for anonymous and free users. From https://www.docker.com/increase-rate-limits:
After this announcement, multiple projects switched to GCR. For example, as a mitigation to the Docker Hub rate limits, Istio provides official mirrors on GCR - see https://istio.io/latest/blog/2020/docker-rate-limit/.
I am not sure why cloud provider openstack maintains the images in Docker Hub; probably there are multiple reasons that I am not aware of. From the outside, as a user of the cloud provider openstack images, it looks a little bit confusing. Currently the main K8s components are maintained under GCR (for example `k8s.gcr.io/kube-apiserver`, `k8s.gcr.io/kube-controller-manager`, etc.). I see that the K8s CSI sidecars relatively recently switched to GCR as well (`k8s.gcr.io/sig-storage/csi-provisioner`, `k8s.gcr.io/sig-storage/csi-attacher`, etc.). I see that the AWS EBS CSI driver is maintained under `k8s.gcr.io` as well - `k8s.gcr.io/provider-aws/aws-ebs-csi-driver`.
I am not sure whether there is a central initiative to maintain the images in the official K8s GCR. But it would be great if the cloud provider openstack images were maintained in GCR as well. In this way, end users won't need to tackle Docker Hub rate limit issues.
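If the images were promoted to the community registry, consumers could pull them the same way as other K8s components. A hypothetical sketch of what that could look like (the `provider-os` path and the tag are assumptions for illustration, not confirmed locations; `DRY_RUN=1`, the default, only prints the commands):

```shell
#!/bin/sh
# Sketch: pulling from the community registry instead of Docker Hub.
# The provider-os path and the tag are assumptions for illustration only.
REG="registry.k8s.io/provider-os"
TAG="v1.26.2"

DRY_RUN="${DRY_RUN:-1}"                    # default: only print the commands
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run docker pull "${REG}/openstack-cloud-controller-manager:${TAG}"
run docker pull "${REG}/cinder-csi-plugin:${TAG}"
```

The exact image paths and available tags would need to be verified against the promoted images list once a move happens.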
Generally, it would be great if maintainers can provide information about: