The #k8s-infra-wg exists within the Kubernetes community to support the infrastructure underlying the project.
Currently the Google-donated cloud credits are largely spent on k8s.gcr.io, as it is the primary registry from which container artefacts are pulled. That is on the order of 50-60% of the total spend.
This cost should be distributed by allowing caches or distributed mirror solutions that can be run locally by Kubernetes providers / clouds and picked up by users when deploying Kubernetes and pulling the k8s.io released OCI images.
As Distribution is also a CNCF project, I think it would be a great solution, creating a cohesive Cloud Native story for registry.k8s.io as a global image/artefact distribution network.
Ideally we'd have a solution that points to registries hosted at Google, Microsoft, Amazon, and a few others, possibly using their cloud-specific registries, but with a clear option to deploy Harbor in a best-practice, community-supported manner.
It's an ongoing and evolving discussion, but some initial ideas:

- Maintain a mapping of Autonomous System Numbers for cloud-provider networks that have a local mirror/cache registry.
- Dynamically modify the manifest.json to redirect to a cloud-local mirror (this may have token issues, as most registries do not allow pulling blobs without auth).
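The ASN-mapping idea above could be sketched roughly as follows. This is a minimal illustration, not an implementation; the ASNs are real provider ASNs used as examples, but the mirror hostnames are hypothetical placeholders, and a real lookup would resolve the client IP to an ASN first.

```go
package main

import "fmt"

// asnMirrors maps Autonomous System Numbers of cloud-provider networks
// to a cloud-local mirror/cache registry. Hostnames are hypothetical.
var asnMirrors = map[uint32]string{
	15169: "distribution.google.k8s.io",    // Google
	8075:  "distribution.microsoft.k8s.io", // Microsoft
	16509: "distribution.amazon.k8s.io",    // Amazon
}

// mirrorFor returns the cloud-local mirror for a client's ASN,
// falling back to the default upstream registry when none is known.
func mirrorFor(asn uint32) string {
	if m, ok := asnMirrors[asn]; ok {
		return m
	}
	return "k8s.gcr.io" // default upstream
}

func main() {
	fmt.Println(mirrorFor(8075))  // client on Microsoft's network
	fmt.Println(mirrorFor(64512)) // private ASN, no mirror known
}
```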
An initial PoC might be:

- Deploy distribution.packet.k8s.io on Packet
- Deploy distribution.microsoft.k8s.io on Azure (or a registry hosted by them)
- Deploy a split-horizon DNS PoC: registry-dns.k8s.io
Thanks @hh
When the Docker throttling conversation came up, several folks from the various clouds and git providers outlined the general position that customers should not pull their production images from a public registry, but rather create a gated import (mirror) in their local registry: Consuming Public Content
For Azure customers, we do host images under MCR (search for oss/), including the Kubernetes and other images we rebuild following best practices for consuming upstream content.
Due to the overall reliability of network connections and resources, the manifest and data should come from the same registry, whether it is cloud-specific (MCR, ECR Public, GCR) or, more likely, customer-specific (ACR, ECR, GCR, or Harbor for on-prem).
I'd also suggest decoupling the registry name from where the packages are referenced, so customers can configure their registry separately from the content they wish to consume, similar to every other package manager. See Is It Time to Change How We Reference Container Images? for a discussion on how we can enable repo mappings.
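Container runtimes already have a rough analogue of this repo-mapping idea: containerd, for example, lets a host keep referencing one registry name while serving pulls from a configured mirror. A minimal sketch, assuming a registry.k8s.io name and a hypothetical Microsoft-hosted mirror:

```toml
# /etc/containerd/certs.d/registry.k8s.io/hosts.toml
# Clients keep referencing registry.k8s.io; pulls are served by the
# mirror below (hostname is a hypothetical placeholder).
server = "https://registry.k8s.io"

[host."https://distribution.microsoft.k8s.io"]
  capabilities = ["pull", "resolve"]
```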
I have two engineers at ii.coop (@hh and @BobyMCbobs) and an evolving team at Microsoft that are keen to help.
@BobyMCbobs's initial explorations for possible implementations of registry.k8s.io using D/Distribution:
There is a similar effort exploring Harbor: goharbor/harbor#14411