[all] Consider switching to (mirroring to) GCR #1540

Closed
ialidzhikov opened this issue May 24, 2021 · 23 comments · Fixed by kubernetes/test-infra#28817
Labels
  • area/provider/openstack: Issues or PRs related to openstack provider
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • sig/cloud-provider: Categorizes an issue or PR as relevant to SIG Cloud Provider.

Comments

@ialidzhikov
Contributor

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
/sig cloud-provider
/area provider/openstack

What happened:

Currently, the cloud provider openstack images are maintained on Docker Hub.

In some environments we face Docker Hub rate limits while pulling OCCM or the Cinder CSI driver:

  Normal   Pulling    12s              kubelet            Pulling image "docker.io/k8scloudprovider/cinder-csi-plugin:v1.19.0"
  Warning  Failed     11s              kubelet            Failed to pull image "docker.io/k8scloudprovider/cinder-csi-plugin:v1.19.0": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     11s              kubelet            Error: ErrImagePull

Last year Docker Hub restricted the number of image pulls for anonymous and free users. From https://www.docker.com/increase-rate-limits:

On November 20, 2020, rate limits for anonymous and free authenticated use of Docker Hub went into effect. Anonymous and Free Docker Hub users are limited to 100 and 200 container image pull requests per six hours, respectively.

After this announcement multiple projects switched to GCR. For example, as a mitigation for the Docker Hub rate limits, Istio provides official mirrors on GCR - see https://istio.io/latest/blog/2020/docker-rate-limit/.

I am not sure why cloud provider openstack maintains its images on Docker Hub; there are probably reasons I am not aware of. From the outside, as a user of the cloud provider openstack images, it looks a little confusing. Currently the main K8s components are maintained under GCR (for example k8s.gcr.io/kube-apiserver, k8s.gcr.io/kube-controller-manager, etc.). The K8s CSI sidecars relatively recently switched to GCR as well (k8s.gcr.io/sig-storage/csi-provisioner, k8s.gcr.io/sig-storage/csi-attacher, etc.), and the AWS EBS CSI driver is also maintained under k8s.gcr.io - k8s.gcr.io/provider-aws/aws-ebs-csi-driver.
I am not sure whether there is a central initiative to maintain images in the official K8s GCR, but it would be great if the cloud provider openstack images were maintained in GCR as well. That way end users would not need to tackle Docker Hub rate limit issues.

Generally, it would be great if the maintainers could provide information about:

  • Is there a central initiative that is pushing projects under git.k8s.io to use the K8s GCR?
  • Is there a plan for cloud provider openstack to maintain its images in GCR or to provide an official mirror on GCR?
@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. area/provider/openstack Issues or PRs related to openstack provider labels May 24, 2021
@lingxiankong
Contributor

I am not aware of any official announcement stating that Kubernetes images should be maintained in GCR (please correct me if I missed something). We are very happy to switch to GCR if there is a mechanism that can be easily integrated with CI; even a manual process sounds great.

For production environments, I would recommend mirroring the images to your private image registry to work around the rate limit concern.
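
A minimal sketch of one way to do that mirroring, assuming the go-containerregistry `crane` package and a hypothetical private registry host (`registry.example.com`); this is an illustration of the suggestion above, not the project's own tooling:

```go
// mirror.go: copy an upstream Docker Hub image into a private registry so
// cluster nodes can pull from the mirror instead of hitting Docker Hub limits.
// The destination registry host is a placeholder; adjust it to your environment.
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	src := "docker.io/k8scloudprovider/cinder-csi-plugin:v1.19.0"
	dst := "registry.example.com/mirror/cinder-csi-plugin:v1.19.0" // hypothetical mirror location

	// crane.Copy pulls the image from src and pushes its manifests and layers to dst.
	if err := crane.Copy(src, dst); err != nil {
		log.Fatalf("mirroring %s to %s failed: %v", src, dst, err)
	}
	log.Printf("mirrored %s to %s", src, dst)
}
```

After mirroring, the cluster manifests (or a registry mirror setting in the container runtime) can point at the mirrored reference instead of docker.io.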

@jichenjc
Contributor

I am suffering from this as well, so I have to wait a long time ... If we can move away from Docker Hub and use a repo without a pull limit, that would be great. @lingxiankong is it a big burden and effort for us to move to GCR (e.g. CI dependency, code repo, etc.)?

@lingxiankong
Contributor

As I said:

We are very happy to switch to GCR if there is a mechanism that can be easily integrated with CI; even a manual process sounds great.

As far as I know, GCR is not free; maybe the Kubernetes community has some contract with Google, but I'm not sure.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 23, 2021
@ramineni
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 25, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 23, 2021
@ialidzhikov
Contributor Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 23, 2021
@ramineni
Contributor

ramineni commented Feb 1, 2022

Adding the comment here for doc reference:
#1753 (comment)

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 1, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mdbooth
Contributor

mdbooth commented Feb 22, 2023

This is also a potential governance issue, btw: how is access to the Docker Hub repo managed?

@mdbooth
Contributor

mdbooth commented Feb 22, 2023

I've submitted kubernetes/test-infra#28817 to run the builds.

@ialidzhikov
Contributor Author

@mdbooth does it mean that we can now pull the OCCM and cinder-csi-driver images from GCR?

@mdbooth
Contributor

mdbooth commented Feb 28, 2023

Not immediately, but hopefully the next release will be available from registry.k8s.io.

@ialidzhikov
Contributor Author

Thanks! I will reopen the issue until the images are available in registry.k8s.io.

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Feb 28, 2023
@k8s-ci-robot
Contributor

@ialidzhikov: Reopened this issue.

In response to this:

Thanks! I will reopen the issue until the images are available in registry.k8s.io.

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mdbooth
Contributor

mdbooth commented Mar 9, 2023

This is done! The first images are available as registry.k8s.io/provider-os/*:v1.27.0-alpha.0. Releases from v1.27 onwards will be available on registry.k8s.io.
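
For anyone switching their manifests over, here is a small sketch of how the new location could be checked from Go with the go-containerregistry `crane` package; the repository name below assumes the registry.k8s.io/provider-os/* pattern from the announcement and is only an illustration:

```go
// checktags.go: list the tags published for OCCM under registry.k8s.io so a
// deployment can be pinned to a tag that actually exists there.
// The repository path is assumed from the provider-os/* pattern above.
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	repo := "registry.k8s.io/provider-os/openstack-cloud-controller-manager"

	// crane.ListTags queries the registry's tag listing endpoint for repo.
	tags, err := crane.ListTags(repo)
	if err != nil {
		log.Fatalf("listing tags for %s failed: %v", repo, err)
	}
	for _, tag := range tags {
		fmt.Println(repo + ":" + tag)
	}
}
```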

/close

@k8s-ci-robot
Contributor

@mdbooth: Closing this issue.

In response to this:

This is done! The first images are available as registry.k8s.io/provider-os/*:v1.27.0-alpha.0. Releases from v1.27 onwards will be available on registry.k8s.io.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

ialidzhikov added a commit to ialidzhikov/gardener-extension-provider-openstack that referenced this issue Mar 24, 2023
…nager` and `cinder-csi-plugin`

With kubernetes/cloud-provider-openstack#1540, the `openstack-cloud-controller-manager` and `cinder-csi-plugin` images are pushed to GCR. Hence, we no longer need to maintain our copies.
@ialidzhikov
Contributor Author

@zetaab I now see that there are many versions available (v1.24.6, v1.25.5, v1.26.2) in GCR - ref https://github.com/kubernetes/k8s.io/blob/main/k8s.gcr.io/images/k8s-staging-provider-os/images.yaml
Is it possible to push the openstack-ccm and cinder-csi-driver images for 1.23 and 1.22? I see that the release-1.23 and release-1.22 branches even have unreleased changes (ref https://github.com/kubernetes/cloud-provider-openstack/tree/release-1.23 and https://github.com/kubernetes/cloud-provider-openstack/tree/release-1.22).

@zetaab
Member

zetaab commented Mar 26, 2023

@ialidzhikov the official support policy in k8s is the last 3 minor versions. Also, release-1.23 and release-1.22 are both missing CI pipelines; at least I am not volunteering to start building releases manually or to backport CI pipelines to such old releases.

@ialidzhikov
Copy link
Contributor Author

Okay, thanks!
