Next Step for Private Registry addon #19033
I'm really hoping some folks from the community will chime in on this. I pinged @smarterclayton as a seed.
For our use case, the primary benefit would be locality and reduction in Internet traffic. Although we're experimenting on AWS, we have a colo where we'll run containers as well. You could also envision carving a cluster in two, with only one set of machines able to talk to the outside world (and running a sanctioned list of services that includes the registry).

Ideally, multiple local registries in different availability zones or providers would sync to each other, but right now the Docker registry only has pull-through capabilities (no push!) and only toward the public Hub. Those issues seem to be on Docker's radar, though. Even more ideally, one day the nodes themselves would transfer images among each other through a peer-to-peer protocol, using the registries only for metadata and local seeding. Just like MPM... https://www.usenix.org/sites/default/files/conference/protected-files/lisa_2014_talk.pdf But I'm digressing.
@therc Thanks for sharing your thoughts.
I set it up manually on AWS using S3 storage, before I even had Kubernetes running. We'd want to have group-based read/write restrictions based on image path. But I guess that only works for manifests. If I understand the protocol correctly, you can still get the blobs if you know their paths, because a layer could theoretically belong both to an image you have access to and to one that you don't (it doesn't live in an image-based path).
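To make that concern concrete, here is a rough sketch against the registry v2 API (the hostname, repository name, token, and digest are all placeholders): blobs are addressed by content digest rather than by image, so a client authorized for any repository containing a layer can fetch that layer's bytes.

```sh
# Hypothetical illustration: a layer blob, identified only by its sha256
# digest, can be pulled through any repository that references it.
TOKEN="..."   # a token scoped to a repo the client IS allowed to read
curl -H "Authorization: Bearer $TOKEN" -o layer.tar.gz \
  "https://registry.example.com/v2/team-a/app/blobs/sha256:<digest>"
# Nothing ties the blob itself to team-a/app; the digest is the only key,
# so manifest-level ACLs don't protect shared layers.
```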
For me it's all about getting the registry closer to the actual Docker daemon: reducing the time to pull images and making it easier to move between cloud providers (reducing vendor lock-in). As long as it's as simple to use as, say, GCR or Docker Hub, I think it's a worthwhile investment. My biggest concern right now is the ability to push to the registry: with the current implementation it's quite hard to push unless you're on one of the nodes, and if you use docker-machine or boot2docker on OS X, port-forwarding won't work for you (unless I'm missing something). I'm less worried about TLS on the registry as long as the communication is all behind a firewall.
@christopherhein I assume you are using docker-machine on your Mac with a local Docker host, right? Port-forwarding should work out of the box if you are on a box with native Docker (not a local VM with Docker). I found a way to make port-forwarding work with docker-machine on a Mac: you need to initiate the port-forwarding inside the Docker host. Then I am able to push images to kube-registry. The pain point is installing kubectl and configuring it on the Docker host. See if this works for you.
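A minimal sketch of that workaround, assuming the default kube-registry addon labels and a docker-machine VM named `default` (both assumptions; adjust to your setup):

```sh
# Step 1: get a shell inside the docker-machine VM, because the Docker
# daemon there must see the registry on its own localhost:5000.
docker-machine ssh default

# Step 2 (inside the VM, with kubectl installed and configured there):
# forward local port 5000 to the kube-registry pod.
POD=$(kubectl get pods --namespace=kube-system -l k8s-app=kube-registry \
      -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward --namespace=kube-system "$POD" 5000:5000 &

# Step 3: tag and push as if the registry were local.
docker tag my-image localhost:5000/my-image
docker push localhost:5000/my-image
```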
@freehan thanks, yes you're right, docker-machine on OS X. I'll see if I can get that running and get back to you.
A few docs for extending kube-registry:
Our use cases are slightly different in OpenShift: we hollow out the guts of the registry and have it store its manifest and tag data in etcd (behind a Kube-like API), and we offer additional features like referential images (tag an image into foo:bar that comes from other image repositories), pull-through (fetch the manifest from the local repository but actually pull content from the remote repository via an auth proxy), and integration with cluster auth (so users on the cluster are automatically authorized to pull images). In that respect the add-on is a very smart proxy, almost an extension of the master. Running one per node is dangerous because it exposes secret information to the nodes, and we try to endorse the content-offload flow, so that wherever content is coming from, it doesn't have to come through the registry.

The use case is to support multiple clusters and a mix of dev and operational needs: have a central dev/ops image registry, and then allow satellite Kube clusters to pull images just via their pull secrets (and leverage local object-store mirroring from Swift or S3 or GCS). It also gives us deeper access to the metadata of images, so we can impose policy on them (don't allow images to run on the cluster that haven't been scanned and had metadata attached).
Our use cases are similar to @christopherhein's. We need a good local development environment for k8s that preferably doesn't require us to call out to Docker Hub/GCR/Quay. Currently we're using a single-node configuration with an insecure registry that we push/pull to at localhost:5000. However, it would be nice if there were an easy way of running a registry on a multi-node Vagrant cluster.
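For reference, a minimal sketch of that kind of single-node setup, assuming the stock registry:2 image and the conventional port:

```sh
# Run the stock v2 registry on the node and push to it over localhost.
# (Docker treats loopback registries as insecure-allowed by default;
# for any non-loopback address the daemon would need to be started with
# --insecure-registry=<host>:5000 or be given a real TLS cert.)
docker run -d -p 5000:5000 --name registry registry:2
docker tag myapp:dev localhost:5000/myapp:dev
docker push localhost:5000/myapp:dev
docker pull localhost:5000/myapp:dev
```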
@smarterclayton I don't know what to do with that. I don't think that…

> I'll flip it around to what we get asked for: a more flexible registry that can mix and match images from multiple external sources and then expose them to the cluster with homogeneous access control, the ability to keep enough metadata to make image policy decisions on the cluster (this image failed virus scan, therefore don't let it run), and the ability to unify multiple clusters with a single registry setup easily. Practically, we also have to enable the long tail of images when people use the cluster as a playground: how do you maintain operational control over tens of thousands of images in use without some ability to control the images that flow into the cluster?
@jbowen93 We could add the…
Couldn't we just have other variables for storage solutions, e.g. for AWS, which would just have to correlate to the corresponding entries in the registry configs?
@christopherhein Yes. For aws/gce, we can do something like that.
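A sketch of how that might look (illustrative only, not the addon's actual configuration): the registry v2 image maps REGISTRY_-prefixed environment variables onto its YAML config keys, so the storage backend can be swapped purely through the container's env. Bucket, region, and credentials below are placeholders.

```sh
# Same stock registry image, but backed by S3 instead of a local volume.
docker run -d -p 5000:5000 \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
  -e REGISTRY_STORAGE_S3_BUCKET=my-registry-bucket \
  -e REGISTRY_STORAGE_S3_ACCESSKEY="$AWS_ACCESS_KEY_ID" \
  -e REGISTRY_STORAGE_S3_SECRETKEY="$AWS_SECRET_ACCESS_KEY" \
  registry:2
# For GCS the equivalent would be REGISTRY_STORAGE=gcs plus the
# REGISTRY_STORAGE_GCS_* settings.
```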
I'm using the registry on a CoreOS Kubernetes cluster, to build and push images from a Jenkins instance that runs locally on the cluster. Since I only have a commodity hardware setup, I don't have the storage options of GCS or S3. It makes sense to me to tweak the registry image to accept an environment variable to configure this, if possible, so that we could set storage options accordingly.

It appears that some TLS documentation was added. I would like to support pushing and pulling from a remote Docker instance; the latter is actually more useful in my case. I currently push to and pull from remote hosts by using netcat to tunnel to the underlying node host, which then interacts with the registry. I'm curious whether the use of TLS certs would allow the removal of the proxy components. Also, I think the Ingress feature of 1.2 might allow pushing without the use of NodePort, but I'm not sure. It would be nice to take some time to mess around with the proxy more.
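If anyone wants to experiment with the Ingress idea, a rough sketch might look like the following. The hostname and TLS secret are hypothetical, and an ingress controller must already be running; the service name and port match the kube-registry addon defaults.

```sh
kubectl create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube-registry
  namespace: kube-system
spec:
  tls:
  - hosts:
    - registry.example.com
    secretName: registry-tls        # hypothetical TLS secret
  rules:
  - host: registry.example.com      # hypothetical external hostname
    http:
      paths:
      - backend:
          serviceName: kube-registry
          servicePort: 5000
EOF
```

With that in place, `docker push registry.example.com/my-image` would go through the ingress controller rather than a NodePort.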
The thing I want a private registry for is a place to push a complete set… Pushing to a bucket in gcr.io would work for our own runs, but is not…
I'm actually chasing something else entirely, it seems: I'd like a proper local development flow with k8s. I'd like to be able to build experimental Docker images locally as part of my development flow and, instead of pushing them to Docker Hub or a private repository hosted on/by Google/Amazon, which is obviously slow and depends on your wifi, make them available to my local k8s cluster... well, err, locally. I could simply run a private Docker registry to the side, on my laptop, outside the Kubernetes cluster, but then one has to faff around with gluing the two together, and it seems more portable and more logical to just stick the damn thing into the cluster.
Continuing discussion in #1319
Current state:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry
What can be done:
Open Questions:
Problem in HTTPS connection (moby/moby#8943)
http://stackoverflow.com/questions/23468530/use-docker-registry-with-ssl-certifictae-without-ip-sans
It looks like the registry only works with certificates using domain names as the CN. This comes back to question #1.
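A possible workaround sketch (purely illustrative; the service DNS name and cluster IP are placeholders): issue a self-signed certificate whose subjectAltName carries both a DNS name and an IP SAN, so clients can verify the registry by either.

```sh
# Generate a self-signed cert with DNS and IP SANs for the registry.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=kube-registry.kube-system.svc" \
  -extensions san -config <(cat <<'EOF'
[req]
distinguished_name = dn
[dn]
[san]
subjectAltName = DNS:kube-registry.kube-system.svc,IP:10.0.0.100
EOF
)
# The registry container can then be pointed at these files via
# REGISTRY_HTTP_TLS_CERTIFICATE and REGISTRY_HTTP_TLS_KEY.
```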