Add support for minions to import and run docker images from tarball #1668
We added this support in our closed-source cluster system. We're considering moving to kubernetes. Our code is C++ and specific to our cluster system, so it would not be of much use for kubernetes. Eventually, I want to open source this C++ code, but it will take a while. If someone wants to add this to the Kubernetes golang code now, that would be awesome.
I think we're trying to avoid solving the "how to get packages to the node" problem, under the theory that docker does this for us. But this does seem like a valid concern. I can perhaps think of other ways to solve this problem, but let's summon @thockin @brendandburns and see if they have some pre-cached thoughts on this.
/cc @timbot |
I think the "direct upload to k8s" use case is a valid one. There are a couple of ways to address it within kubernetes: (1) run your own docker registry and push images to it; (2) push images to a hosted registry; (3) have kubernetes itself accept image uploads and distribute them to the nodes.
(1) is an evolving topic - the current python registry probably doesn't afford a sufficient amount of authentication/authorization control. (2) doesn't work in "push from behind the firewall" cases. (3) is a bit murky, but could possibly be solved by running a hidden deployment-local registry that relies on k8s for auth.
Actually our internal system has exactly this - a way to attach a tarball. I would be OK with this, but we have to understand the weight of it.
/cc @smarterclayton since OpenShift supports builds, images, etc. on top of k8s |
I would argue that uploading a binary to k8s and then sending it to the nodes is the wrong way to reimplement the docker registry API. We've spent a lot of time trying to more closely integrate the registry into a kube deployment, including things like authorization, quota, pruning, etc. Our first steps are at https://github.com/openshift/docker-registry-extensions, and a more detailed design doc is available at #1132. There are other use cases, like making that info available to other Kube components. We'd also like to make running a docker registry in the cluster with proper auth permissions as easy as possible - we're working with upstream on both the new and old registry to ease that. The new registry design will be much simpler and should enable more efficient integration.
The 2 options for running a registry that includes auth/z are: (1) front a standalone registry with a proxy or web server that performs the access checks, or (2) implement a full Index that the registry delegates authentication and authorization to.
Option 1 is presumably easier, although I don't think it's as simple as just fronting the registry with a web server that protects certain routes with basic authentication. You really need to be able to validate that the client making the request is authorized to access a particular repository and image. That information isn't stored in the registry. Docker's model stores that information in an Index, but you could get away with a simple proxy that performs its own access checks (against some backend) without needing to implement a full-blown Index. I agree with @smarterclayton that I don't think uploading a binary to k8s for distribution to the nodes is the right approach.
OpenShift has an ImageRepository resource that corresponds to a Docker image repository in a registry. An ImageRepository will be scoped to a namespace. Users who have access to the namespace will have pull permissions for the corresponding image repository in the registry. We'll have a proxy in front of the registry that performs the authentication and authorization checks prior to allowing push/pull requests to proceed. We're also interested in adding additional features to the registry through custom extensions in https://github.com/openshift/docker-registry-extensions.
See also #1319. It's already possible today to run a docker-registry local to a pod. If that registry points to some shared blob storage, you could run a registry locally to push there and then reference other images with
And with GCE volumes now in, you could use a GCE attached disk to mount the registry as well. |
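The pod-local registry approach above can be sketched with stock docker commands. The registry address (`localhost:5000`) and image name (`myapp:latest`) are illustrative placeholders, not details from this thread; a real deployment would also point the registry's storage at a shared blob store:

```shell
# Minimal sketch: run a cluster-local registry and push a local image to it.
# REGISTRY and IMAGE are hypothetical names chosen for this example.
REGISTRY=localhost:5000   # address pods would use to pull
IMAGE=myapp:latest        # a locally built image

# registry:2 is the stock Docker registry image; in a real setup, configure
# its REGISTRY_STORAGE_* environment to use shared blob storage (S3/GCS).
docker run -d -p 5000:5000 --name registry registry:2 || true

# Tag and push so pods can pull the image from the cluster-local registry.
docker tag "$IMAGE" "$REGISTRY/$IMAGE" || true
docker push "$REGISTRY/$IMAGE" || true

echo "Reference the image in pod specs as: $REGISTRY/$IMAGE"
```

Pods would then name the image as `localhost:5000/myapp:latest` (or whatever address the registry service is reachable at from the node).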
Given that there are a number of ways to do this (your own cluster startup scripts, run a daemonset to side load your custom images, create VM images with images pre-loaded, run a cluster-local docker registry), and the fact that there have been no substantial updates in over two years, I'm going to close this as obsolete. |
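The "daemonset to side-load your custom images" option mentioned above could look roughly like the sketch below. Every name in it (the `image-loader` DaemonSet, the `/opt/images/myapp.tar` tarball path, the `docker:cli` helper image) is a hypothetical placeholder; it assumes tarballs have already been staged on each node's filesystem and that the node runs the Docker engine with its socket at the usual path:

```yaml
# Hypothetical DaemonSet that imports a staged image tarball on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-loader
spec:
  selector:
    matchLabels:
      app: image-loader
  template:
    metadata:
      labels:
        app: image-loader
    spec:
      initContainers:
      - name: load
        image: docker:cli                 # any image containing the docker CLI
        command: ["docker", "load", "-i", "/images/myapp.tar"]
        volumeMounts:
        - name: images
          mountPath: /images
        - name: docker-sock
          mountPath: /var/run/docker.sock
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9  # keeps the pod resident after loading
      volumes:
      - name: images
        hostPath:
          path: /opt/images               # where tarballs are staged per node
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock      # lets the init container talk to dockerd
```

Because a DaemonSet schedules one pod per node, the `docker load` runs once on every minion, which is essentially the side-loading behavior this issue asked for.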
I'd like to share the way (metrue/fx#315) I'm doing this in fx.
The docker private registry is still not fully fleshed out, and many organizations do not want to upload their images to the global docker hub. For these scenarios, a solution that pushes a tarball file down to the minions and lets them import it into their local repository and run it is a better fit. Can we add this support?
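The requested workflow already exists as plain docker commands; what this issue asks for is kubernetes automating it. A rough sketch of the manual version, where the image name (`myapp:latest`) and minion hostname (`node-1`) are placeholders rather than details from the issue:

```shell
# Sketch of the manual tarball push/import workflow.
# IMAGE and node-1 are hypothetical names for illustration.
IMAGE=myapp:latest
TARBALL=/tmp/myapp.tar

# On the build machine: export the image and its layers to a tarball.
docker save -o "$TARBALL" "$IMAGE" || true

# Copy the tarball to the minion, then import it into the node's local
# Docker image store so pods can run it without any registry involved.
scp -o ConnectTimeout=5 "$TARBALL" node-1:"$TARBALL" || true
ssh -o ConnectTimeout=5 node-1 docker load -i "$TARBALL" || true

echo "Imported $IMAGE on node-1 from $TARBALL"
```

After the `docker load`, a pod spec on that node can reference `myapp:latest` directly, provided the image pull policy does not force a registry pull.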