Add support for minions to import and run docker images from tarball #1668

Closed
sflxn opened this issue Oct 9, 2014 · 13 comments
Labels
priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.
sig/cluster-lifecycle: Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.
sig/node: Categorizes an issue or PR as relevant to SIG Node.

Comments

@sflxn

sflxn commented Oct 9, 2014

The docker private registry is still not fully fleshed out, and many organizations do not want to upload their images to the global docker hub. For these scenarios, a better option is to push down a tarball, let the minions import it into their local repository, and run it. Can we add this support?
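
For reference, the manual version of what's being asked for, using the stock docker CLI, looks something like the following sketch (image name and paths are illustrative):

```sh
# On the build machine: export the image from the local docker daemon to a tarball.
docker save -o myapp.tar myapp:1.0

# Push the tarball down to a minion, e.g. over ssh.
scp myapp.tar minion1:/tmp/myapp.tar

# On the minion: import the tarball into the local image store.
docker load -i /tmp/myapp.tar
```

The request is essentially for kubernetes to automate this fan-out rather than every user scripting it per node.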

@lavalamp lavalamp added kind/support-question, kind/design and removed kind/support-question labels Oct 9, 2014
@sflxn
Author

sflxn commented Oct 9, 2014

We added this support in our closed-source cluster system. We're considering moving to kubernetes. Our code is C++ and specific to our cluster system, so it would not be of much use for kubernetes. Eventually, I want to open source this C++ code, but it will take a while. If someone wants to add this to the Kubernetes golang code now, that would be awesome.

@lavalamp
Copy link
Member

lavalamp commented Oct 9, 2014

I think we're trying to avoid solving the "how to get packages to the node" problem, under the theory that docker does this for us. But this does seem like a valid concern.

I can perhaps think of other ways to solve this problem, but let's summon @thockin @brendandburns and see if they have some pre-cached thoughts on this.

@bgrant0607
Member

/cc @timbot

@timbot

timbot commented Oct 9, 2014

I think the "direct upload to k8s" use case is a valid one. There are a couple ways to address it within kubernetes:

  1. Run a k8s-deployment-local registry, and provide some means to push to it.
  2. Provide k8s with a URI and credentials for a remote registry, and instruct it to pull from it.
  3. Allow a direct tarball import, and have k8s do... something... with the tarball that results in it getting imported to the local deployment.

(1) is an evolving topic - the current python registry probably doesn't afford a sufficient amount of authentication/authorization control. (2) doesn't work in "push from behind the firewall" cases. (3) is a bit murky, but could possibly be solved by running a hidden deployment-local registry that relies on k8s for auth.
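
As a concrete sketch of (1), with the stock registry image and no auth (so subject to exactly the caveats above; host and image names are illustrative):

```sh
# Run a deployment-local registry on some reachable host.
docker run -d -p 5000:5000 --name registry registry

# From a client: tag an image against that registry and push it.
docker tag myapp:1.0 registry-host:5000/myapp:1.0
docker push registry-host:5000/myapp:1.0
```

Minions could then pull registry-host:5000/myapp:1.0 without touching the public hub (the docker daemon may need --insecure-registry registry-host:5000 if the registry isn't behind TLS).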

@thockin
Member

thockin commented Oct 9, 2014

Actually our internal system has exactly this - a way to attach a tarball to a job when it runs (slightly different use case, same basic design).

I would be OK with this, but we have to understand the weight of it - that tar will get saved in etcd. I'm not sure that is OK.


@bgrant0607
Member

/cc @smarterclayton since OpenShift supports builds, images, etc. on top of k8s
/cc @proppy since we were talking about local minion registries earlier today

@smarterclayton
Contributor

I would argue that uploading a binary to k8s and then sending it to the nodes is the wrong way to reimplement the docker registry api.

We've spent a lot of time trying to more closely integrate the registry into a kube deployment, including things like authorization, quota, pruning, etc. Our first steps are here: https://github.com/openshift/docker-registry-extensions, and a more detailed design doc is available at #1132. There are other use cases, like making that info available to other Kube components.

We'd also like to make running a docker registry in the cluster with proper auth permissions as easy as possible - we're working with upstream on both the new and old registry to ease that. The new registry design will be much simpler and should enable more efficient integration.

@ncdc
Member

ncdc commented Oct 9, 2014

The two options for running a registry that includes auth/z are:

  1. Put a proxy in front of the registry that performs the auth/z checks
  2. Implement a custom Index

Option 1 is presumably easier, although I don't think it's as simple as just fronting the registry with a web server that protects certain routes with basic authentication. You really need to be able to validate that the client making the request is authorized to access a particular repository and image. That information isn't stored in the registry. Docker's model stores it in an Index, but you could get away with a simple proxy that performs its own access checks (against some backend) without needing to implement a full-blown Index.
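
As a sketch of option 1, the proxy could delegate a per-request access check to an external backend instead of relying on basic auth alone, e.g. with nginx's auth_request module (the backend address and routes here are assumptions):

```nginx
# Hypothetical front-end: every registry request is authorized by a
# subrequest to a separate access-check service before being proxied.
server {
    listen 443 ssl;
    # ssl_certificate directives elided

    location /v1/ {
        auth_request /authorize;           # consult the backend first
        proxy_pass http://localhost:5000;  # the registry itself
    }

    location = /authorize {
        internal;
        proxy_pass http://auth-backend:8080/check;  # illustrative backend
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}
```

The backend sees the original URI, so it can map repository paths to its own access rules - roughly the Index role, without implementing the full Index API.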

I agree with @smarterclayton that I don't think uploading a binary to k8s for distribution to the nodes is the right approach.

@ncdc
Member

ncdc commented Oct 9, 2014

OpenShift has an ImageRepository resource that corresponds to a Docker image repository in a registry. An ImageRepository will be scoped to a namespace. Users who have access to the namespace will have pull permissions for the corresponding image repository in the registry.

We'll have a proxy in front of the registry that performs the authentication and authorization checks prior to allowing push/pull requests to proceed.

We're also interested in adding additional features to the registry through custom extensions in https://github.com/openshift/docker-registry-extensions:

  • Pull by id - whenever a tag is pushed, automatically add a 2nd tag whose name is the image's id
  • Quotas - when a client wants to push a layer, check its size and make sure they don't go over their quota
  • Pruning - when an image is no longer tagged, either directly or indirectly, automatically remove it

@proppy
Contributor

proppy commented Oct 9, 2014

See also #1319

It's already possible today to run a docker-registry local to a pod.

If that registry points to some shared blob storage, you could run a registry locally to push there, and then reference other images with localhost:5000 in the same pod manifest to pull them.
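
A minimal sketch of that pattern (written against the current v1 schema; the storage wiring is elided and all names are illustrative):

```yaml
# A registry pod whose port is exposed on the node, so the node's docker
# daemon can pull from localhost:5000.
apiVersion: v1
kind: Pod
metadata:
  name: local-registry
spec:
  containers:
  - name: registry
    image: registry              # stock docker-registry image
    # configuration pointing it at the shared blob storage elided
    ports:
    - containerPort: 5000
      hostPort: 5000
```

Other containers in a manifest on that node can then be declared with image: localhost:5000/myapp:1.0.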

@smarterclayton
Copy link
Contributor

And with GCE volumes now in, you could use a GCE attached disk to mount the registry as well.
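
For example (disk name and mount path are assumptions; the GCE persistent disk must already exist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: registry
spec:
  containers:
  - name: registry
    image: registry
    volumeMounts:
    - name: registry-storage
      mountPath: /registry       # wherever the registry keeps its data
  volumes:
  - name: registry-storage
    gcePersistentDisk:
      pdName: registry-disk      # pre-created GCE persistent disk
      fsType: ext4
```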

@bgrant0607 bgrant0607 added the priority/awaiting-more-evidence label Dec 4, 2014
@bgrant0607 bgrant0607 added sig/cluster-lifecycle, sig/node and removed area/images-registry, kind/design, team/cluster (deprecated - do not use) labels Feb 10, 2017
@roberthbailey
Contributor

Given that there are a number of ways to do this (your own cluster startup scripts, a daemonset that side-loads your custom images, VM images with your images pre-loaded, or a cluster-local docker registry), and the fact that there have been no substantial updates in over two years, I'm going to close this as obsolete.
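
For the daemonset option, a sketch might look like the following (the tarball path, names, and the docker socket mount are all assumptions about a docker-based node):

```yaml
# Hypothetical DaemonSet that side-loads a pre-staged image tarball into
# each node's docker daemon.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-loader
spec:
  selector:
    matchLabels:
      app: image-loader
  template:
    metadata:
      labels:
        app: image-loader
    spec:
      containers:
      - name: loader
        image: docker                      # official docker CLI image
        command: ["sh", "-c",
                  "docker load -i /images/myapp.tar && tail -f /dev/null"]
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock  # talk to the node's daemon
        - name: images
          mountPath: /images
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: images
        hostPath:
          path: /opt/images                # tarball staged here on each node
```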

@metrue

metrue commented Oct 17, 2019

I'd like to share the way (metrue/fx#315) I'm doing it in fx.

bertinatto pushed a commit to bertinatto/kubernetes that referenced this issue Aug 23, 2023
OCPBUGS-14301:  UPSTREAM: 117249,118189: fix TopologyCache crashes