Next Step for Private Registry addon #19033

Closed · freehan opened this issue Dec 22, 2015 · 23 comments
Labels: area/images-registry, lifecycle/rotten, priority/backlog, sig/cluster-lifecycle

Comments

@freehan (Contributor) commented Dec 22, 2015

Continuing discussion in #1319

Current state:

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry

What can be done:

  1. Switch the kube-registry image storage to GCS or S3.
  2. Expose an external IP for kube-registry, ideally with TLS and authentication enabled.
  3. On each node, add kube-registry as an insecure registry. This can be done by setting KUBE_ENABLE_INSECURE_REGISTRY=true when running kube-up.sh. With this setup there is no need for a registry proxy on each node; the kubelet can pull images using the registry service IP. It would be nicer to use the DNS name of the kube-registry service, but kube-dns does not currently work on the nodes themselves.
  4. Enable authentication with htpasswd. The htpasswd file can be packed into a secret.
  5. Enable TLS with a certificate signed by the cluster CA or provided by the user. The cert can also be packed into a secret (see the sketch after this list).
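
For concreteness, here is a rough sketch of how items 3 through 5 could be wired together. The secret names, file paths, and credentials are placeholders, and "kubectl create secret generic" assumes a reasonably recent kubectl; the addon itself may end up consuming the secrets differently.

# (3) Bring the cluster up with the registry allowed as an insecure registry
#     on each node's Docker daemon.
KUBE_ENABLE_INSECURE_REGISTRY=true ./cluster/kube-up.sh

# (4) Pack an htpasswd file (bcrypt, which registry:2 requires) into a secret.
htpasswd -Bbn pushuser examplepassword > ./registry-htpasswd
kubectl create secret generic kube-registry-auth \
  --namespace=kube-system \
  --from-file=htpasswd=./registry-htpasswd

# (5) Pack a TLS key pair (signed by the cluster CA or supplied by the user)
#     into a secret the registry pod can mount.
kubectl create secret generic kube-registry-tls \
  --namespace=kube-system \
  --from-file=tls.crt=./registry.crt \
  --from-file=tls.key=./registry.key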

Open Questions:

  1. How do we get a resolvable name for kube-registry on each node? Maybe we can add an entry to /etc/hosts for now?
  2. I tried to set up a registry with a self-signed cert. If the cert uses the registry IP as its common name, I hit exactly the problem described in moby/moby#8943 ("Problem in HTTPS connection") and http://stackoverflow.com/questions/23468530/use-docker-registry-with-ssl-certifictae-without-ip-sans: the registry only seems to work with certificates that use a domain name as the CN (or carry an IP SAN). This comes back to question 1 (a rough sketch follows this list).
  3. What is the ideal way to interact with kube-registry from outside the k8s cluster? Proxy? Port-forwarding? External IP? With TLS or insecure?
  4. What should the out-of-the-box features be?
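
To make questions 1 and 2 concrete, here is a rough sketch of the stop-gap being discussed. The DNS name is a placeholder, the jsonpath output assumes a kubectl new enough to support it, and the -addext flag needs OpenSSL 1.1.1 or later.

# Question 1: give the registry service IP a stable name on each node.
REGISTRY_IP=$(kubectl get svc kube-registry --namespace=kube-system \
  -o jsonpath='{.spec.clusterIP}')
echo "${REGISTRY_IP} kube-registry.kube-system.svc" >> /etc/hosts

# Question 2: issue the self-signed cert for that DNS name, and also list the
# service IP as a SAN, so Docker's TLS verification does not reject it.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=kube-registry.kube-system.svc" \
  -addext "subjectAltName=DNS:kube-registry.kube-system.svc,IP:${REGISTRY_IP}"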
@freehan added the kind/enhancement, priority/backlog, and sig/cluster-lifecycle labels on Dec 22, 2015
@thockin (Member) commented Dec 29, 2015

I'm really hoping some folks from the community will chime in on this. I really want to understand what it is that people want to do with a cluster-private registry: how do you want to use it? It keeps coming up as a pain point for people, but we have not really pinned down exactly what the pain is...

@smarterclayton as a seed

@therc (Member) commented Jan 1, 2016

For our use case, the primary benefit would be locality and reduction in Internet traffic. Although we're experimenting on AWS, we have a colo where we'll run containers as well. You could also envision carving a cluster in two, with only one set of machines being able to talk to the outside world (and running a sanctioned list of services that includes the registry). Ideally, multiple local registries in different availability zones or providers would sync to each other, but right now the Docker registry only has pull-through capabilities (no push!) and only toward the public Hub. Those issues seem to be on Docker's radar, though.

Even more ideally, one day the nodes themselves would transfer images among each other through a peer-to-peer protocol, using only the registries for metadata and local seeding. Just like MPM...

https://www.usenix.org/sites/default/files/conference/protected-files/lisa_2014_talk.pdf

But I'm digressing.

@freehan (Contributor, Author) commented Jan 5, 2016

@therc Thanks for sharing your thoughts.
Did you use the kube-registry addon to set up the private registry? What level of security are you looking for?

@therc (Member) commented Jan 5, 2016

I set it up manually on AWS using S3 storage, before I even had Kubernetes running. We'd want group-based read/write restrictions based on image path, but I guess that only works for manifests. If I understand the protocol, you can still get the blobs if you know their paths, because a layer could theoretically belong to both an image you have access to and one that you don't (it doesn't live in an image-based path).

@christopherhein (Member) commented:

For me it's all about getting the registry closer to the actual Docker daemon: reducing the time to pull images and making it easier to move between cloud providers (reducing vendor lock-in). As long as it's as simple to use as, say, GCR or Docker Hub, I think it's a worthwhile investment.

My biggest concern right now is the ability to push to the registry: with the current implementation it's quite hard to push unless you're on one of the nodes, and if you use docker-machine or boot2docker on OS X, port-forwarding won't work for you (unless I'm missing something).

I'm less worried about TLS on the registry as long as the communication all stays behind a firewall.

@freehan (Contributor, Author) commented Jan 7, 2016

@christopherhein I assume you are using docker-machine on your Mac with a local Docker host, right? Port-forwarding should work out of the box if you are on a box with native Docker (not a local VM running Docker).

I found a way to make port-forwarding work with docker-machine on a Mac: initiate the port-forward from inside the Docker host, and then you can push images to kube-registry. The pain point is having to install and configure kubectl on the Docker host.

See if this works for you.
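
Roughly, the flow looks like this; the VM name (default), the pod label, and the image names below are assumptions on my part:

# 1. Open a shell inside the docker-machine VM (i.e. the Docker host).
docker-machine ssh default

# 2. Inside the VM, with kubectl installed and pointed at the cluster, forward
#    the registry pod's port 5000 to the VM's localhost.
POD=$(kubectl get pods --namespace=kube-system -l k8s-app=kube-registry \
  -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward --namespace=kube-system "${POD}" 5000:5000 &

# 3. Back on the Mac, the docker client talks to the daemon inside the VM, so
#    localhost:5000 resolves from the VM's point of view and the push works.
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage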

@christopherhein (Member) commented:

@freehan Thanks, yes you're right: docker-machine on OS X. I'll see if I can get that running and get back to you.

@freehan (Contributor, Author) commented Jan 26, 2016

@smarterclayton (Contributor) commented:

Our use cases are slightly different in OpenShift. We hollow out the guts of the registry and have it store its manifest and tag data in etcd (behind a Kube-like API), and we offer additional features: referential images (tagging an image into foo:bar from another image repository), pull-through (fetch the manifest from the local repository but actually pull content from the remote repository via an auth proxy), and integration with cluster auth (so users on the cluster are automatically authorized to pull images). In that respect the add-on is a very smart proxy, almost an extension of the master.

Running one per node is dangerous because it exposes secret information to the nodes, and we try to endorse the content-offload flow, so that wherever content is coming from, it doesn't have to come through the registry.

The use case is to support multiple clusters and a mix of dev and operational needs: have a central dev/ops image registry, and then allow satellite Kube clusters to pull images just via their pull secrets (and leverage local object store mirroring from Swift, S3, or GCS). It also gives us deeper access to the metadata of images, so we can impose policy on them (e.g. don't allow images to run on the cluster that haven't been scanned and had metadata attached).

@jbowen93 commented:
Our use cases are similar to @christopherhein's. We need a good local development environment for k8s, preferably one that doesn't require us to call out to Docker Hub/GCR/Quay. Currently we're using a single-node configuration with an insecure registry that we push to and pull from at localhost:5000. However, it would be nice if there were an easy way to run a registry on a multi-node Vagrant cluster.

@thockin (Member) commented Jan 28, 2016

@smarterclayton I don't know what to do with that. I don't think that usage pattern is particularly mainstream.


@smarterclayton (Contributor) commented Jan 28, 2016 via email

@freehan (Contributor, Author) commented Jan 29, 2016

@jbowen93 We could add the KUBE_ENABLE_CLUSTER_REGISTRY flag for other k8s providers. That should make it easier to set up kube-registry. The catch is that, for most providers, it will only come with emptyDir as the default storage backend.
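
For example (assuming the flag is wired through kube-up.sh the same way KUBE_ENABLE_INSECURE_REGISTRY is):

# Bring up the cluster with the registry addon enabled. Without extra storage
# configuration the registry falls back to an emptyDir volume, so pushed
# images do not survive the pod being rescheduled.
KUBE_ENABLE_CLUSTER_REGISTRY=true ./cluster/kube-up.sh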

@christopherhein (Member) commented:

Couldn't we just have additional variables for the storage backends? For example, for AWS:

CLUSTER_REGISTRY_STORAGE="s3"
CLUSTER_REGISTRY_ACCESSKEY="awsaccesskey"
CLUSTER_REGISTRY_SECRETKEY="awssecretkey"
CLUSTER_REGISTRY_REGION="us-west-1"
CLUSTER_REGISTRY_BUCKET="bucketname"

Which would just have to correlate to:

REGISTRY_STORAGE
REGISTRY_STORAGE_S3_ACCESSKEY
REGISTRY_STORAGE_S3_SECRETKEY
REGISTRY_STORAGE_S3_REGION
REGISTRY_STORAGE_S3_BUCKET

in the registry config.
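
As a sanity check of the mapping, the registry:2 image reads these REGISTRY_STORAGE_* variables directly from its environment, so the proposed values could be tried standalone before being wired into the addon (the bucket, region, and credentials below are placeholders):

docker run -d --name registry-s3-test -p 5000:5000 \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_ACCESSKEY=awsaccesskey \
  -e REGISTRY_STORAGE_S3_SECRETKEY=awssecretkey \
  -e REGISTRY_STORAGE_S3_REGION=us-west-1 \
  -e REGISTRY_STORAGE_S3_BUCKET=bucketname \
  registry:2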

@freehan (Contributor, Author) commented Jan 29, 2016

@christopherhein Yes. For AWS/GCE we can do something like that.

@pwais commented Feb 17, 2016

  1. Please don't tie the private registry to an external service like GCS or S3. Optional support for those is nice, though.
  2. It would be nice if k8s came with the pieces necessary to build and distribute Docker images inside the cluster (and only inside the cluster, hence a private registry). Whether that's included in k8s as an add-on or as a polished and maintained piece in /examples, I think there's plenty of desire for it to happen. (How do k8s users get by WITHOUT a private registry, assuming they're using k8s on-prem or on a cloud provider without a registry?)

@apple-corps commented:
I'm using the registry on a CoreOS Kubernetes cluster. The way I'm using it is to build and push images from a Jenkins image running locally on the cluster. Since I only have a commodity hardware setup, I don't have GCS or S3 as storage options. It makes sense to me to tweak the registry image to accept environment variables for this, if possible; that way we could set storage options accordingly.

It appears that some TLS documentation was added. I would like to support pushing to and pulling from a remote Docker instance; the latter is actually more useful in my case. I currently move images to and from remote hosts by using netcat to push and pull between the underlying node host, and then interact with the registry from there.

I'm curious whether using the TLS certs would allow the removal of the proxy components.

Also, I think the Ingress feature of 1.2 might allow pushing without the use of NodePort, but I'm not sure.
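
For what it's worth, here is a minimal sketch of what that Ingress might look like, assuming the extensions/v1beta1 API that shipped around 1.2; the hostname and TLS secret name are placeholders, and the ingress controller's request body size limit would likely also need raising for large layers:

kubectl create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube-registry
  namespace: kube-system
spec:
  tls:
  - hosts:
    - registry.example.com
    secretName: kube-registry-tls
  rules:
  - host: registry.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kube-registry
          servicePort: 5000
EOF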

It would be nice to take some time to mess around with the proxy more.

@bgrant0607 (Member) commented:

See also #11725.

cc @lavalamp

@lavalamp (Member) commented Jun 2, 2016

The thing I want a private registry for is a place to push a complete set of test containers, so we consistently test our addons from head instead of from whatever was built and pushed last.

Pushing to a bucket in gcr.io would work for our own runs, but is not convenient for tests run outside of Google.


@namliz commented Jun 14, 2016

I'm actually chasing something else entirely, it seems: I'd like a proper local development flow with k8s.

I'd like to be able to build experimental Docker images locally as part of my development flow and, instead of pushing them to Docker Hub or a private repository hosted by Google/Amazon (which is obviously slow and depends on your wifi), make them available to my local k8s cluster... well, err, locally.

I could simply run a private Docker registry on the side, on my laptop and outside the Kubernetes cluster, but then one has to faff around with gluing the two together; it seems more portable and more logical to just stick the damn thing into the cluster.

@fejta-bot commented:
Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 16, 2017
@fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 15, 2018
@fejta-bot commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
