
Higher level image and deployment concepts in Kubernetes #503

Closed
smarterclayton opened this issue Jul 17, 2014 · 16 comments

@smarterclayton (Contributor) commented Jul 17, 2014

In Kubernetes, the reference from a container manifest to an image is a "name" - that name is arbitrary, and it is up to the user to specify how that name interacts with their docker build and docker registry scenarios. That includes ensuring that the name and label the user uses to refer to their image is not changed accidentally (so that new images aren't introduced outside of a controlled deployment process), and that the registry DNS that hosts the images is continuously available for as long as that image may be needed (see the docker image discussions in moby/moby#6805 for how this might change).

That loose coupling is valuable for flexibility, but the lack of a concrete process leaves room for error and requires thought and control. In addition, the resolution of those names is tightly bound to the execution of the container in the Kubelet.

We think there is value in Kubernetes providing a set of higher level concepts above pods/replication controllers that can be used to create deployable units of containers. Two concepts we see as valuable are "builds" and "deployments" - the former can be used to compose new images (by leveraging the Kubernetes cluster for build slaves with resource control) and the latter can manage the process of transitioning between one set of podTemplates to another (and can be triggered by builds).
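To make the "deployment" idea above concrete, here is a minimal sketch of transitioning between two podTemplates by shifting replica counts between replication controllers. The types and function names are illustrative only, not a real or proposed Kubernetes API.

```go
package main

import "fmt"

// ReplicationController is a toy stand-in for the real object: it holds
// a podTemplate identifier (simplified to a version label) and a count.
type ReplicationController struct {
	Template string
	Replicas int
}

// rollingTransition moves capacity to newRC one replica at a time,
// scaling oldRC down in step so total capacity stays roughly constant.
func rollingTransition(oldRC, newRC *ReplicationController, desired int) {
	for newRC.Replicas < desired {
		newRC.Replicas++
		if oldRC.Replicas > 0 {
			oldRC.Replicas--
		}
	}
}

func main() {
	oldRC := &ReplicationController{Template: "v1", Replicas: 3}
	newRC := &ReplicationController{Template: "v2", Replicas: 0}
	rollingTransition(oldRC, newRC, 3)
	fmt.Println(oldRC.Replicas, newRC.Replicas) // 0 3
}
```

A build finishing could trigger exactly this kind of transition, with the new image name baked into the new podTemplate.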

First, is this something that should be in Kubernetes? Should it be on top of Kubernetes as a separate server? Or is it something that could be optionally enabled by those who wish to work on it? We've got some ideas of how we could make this flow work really cleanly with Docker and images, but we'd want to get feedback on those ideas.

@thockin (Member) commented Jul 19, 2014

I think there's value in those abstractions, but to my naive ears they sound like something built atop the core k8s primitives. We might still want to endorse and embrace them, but they are, by principle, a layer above. I think.


@smarterclayton (Contributor, Author) commented Jul 19, 2014

Agreed - I don't think pods or replication controllers should know anything about builds or deployments. In fact, the layering is reversed: a type of build should be able to use a run-once pod to accomplish its goal, while a type of deployment may depend on a particular sequence of calls to replication controllers.

@ncdc (Member) commented Jul 25, 2014

We think a comprehensive platform should include deployment capabilities and a means to build images without requiring external infrastructure. Building images needs hosting infrastructure; at scale, we'd prefer to use the cluster's resources where possible and schedule builds just like any other task (i.e. a pod). To model this problem, we need the notion of a pod that runs only once. We wanted to get a sense of what this integration might feel like.

To that end, we've been working on a prototype to add the ability to build images in Kubernetes. We feel like there should be something fundamental between a build and a pod, so we’ve also added a simple job framework and a POC implementation of run-once semantics for pods as well.

A job contains a pod template, status, success flag, and a reference to the resulting pod. We expect that there will be different types of jobs in the future - for example, running a process inside an existing container or running a pod with multiple containers that all have to complete. We also expect to add dependency information such as predecessors and successors to jobs.
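As a rough sketch, the job shape described above (pod template, status, success flag, and a reference to the resulting pod) might look like the following. The field names are hypothetical, chosen only to mirror this comment, and are not the API that eventually shipped.

```go
package main

import "fmt"

// PodTemplate is simplified here to just a list of image names.
type PodTemplate struct {
	Containers []string
}

// Job mirrors the fields listed in the comment: a template to run,
// a coarse status, whether it succeeded, and the pod it produced.
type Job struct {
	Name      string
	Template  PodTemplate
	Status    string // e.g. "NOTSTARTED", "RUNNING", "COMPLETE"
	Succeeded bool
	PodName   string // reference to the run-once pod created for this job
}

func main() {
	j := Job{
		Name:     "image-build-1",
		Template: PodTemplate{Containers: []string{"builder:latest"}},
		Status:   "NOTSTARTED",
	}
	fmt.Println(j.Name, j.Status)
}
```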

A new job controller (similar to the replication controller) looks for new jobs in storage and acts on them. It creates a run-once pod from the job's pod template and monitors the pod's status to completion.

The job controller will support different job types through delegation in the future.
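A single reconciliation step of the job controller described above could be sketched like this, with the store and pod client reduced to hypothetical function hooks; everything here is illustrative, not the prototype's actual code.

```go
package main

import "fmt"

// Job is a minimal stand-in for the job object the controller acts on.
type Job struct {
	Name    string
	Status  string // "NOTSTARTED", "RUNNING", "COMPLETE"
	Success bool
	PodName string
}

// syncJob is one reconciliation step: createPod launches a run-once pod
// and returns its name; podPhase reports "Running", "Succeeded", or
// "Failed". New jobs get a pod; running jobs fold the pod's terminal
// phase back into the job status.
func syncJob(j *Job, createPod func(string) string, podPhase func(string) string) {
	switch j.Status {
	case "NOTSTARTED":
		j.PodName = createPod(j.Name + "-pod")
		j.Status = "RUNNING"
	case "RUNNING":
		switch podPhase(j.PodName) {
		case "Succeeded":
			j.Status, j.Success = "COMPLETE", true
		case "Failed":
			j.Status, j.Success = "COMPLETE", false
		}
	}
}

func main() {
	j := &Job{Name: "build-1", Status: "NOTSTARTED"}
	createPod := func(name string) string { return name }
	podPhase := func(string) string { return "Succeeded" }
	syncJob(j, createPod, podPhase) // launches the pod
	syncJob(j, createPod, podPhase) // observes completion
	fmt.Println(j.Status, j.Success) // COMPLETE true
}
```

In the real controller, the loop would be driven by watching storage for new jobs rather than by direct calls.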

A build is a user's request to create a new Docker image from one or more inputs (such as a Dockerfile or Docker context). In our POC we implement Dockerfile builds; we expect to support multiple build types such as STI (Source-to-Image), Packer, Dockerfile2, etc. We are especially interested in feedback about how this problem should be modeled to facilitate other build extensions.

A new build controller (similar to the replication controller) looks for new builds in storage and acts on them. It creates a job for the build, executes the job, and monitors its status to completion (success or failure).

The build controller can support different build implementations, with the initial prototype defining a container that runs its own Docker daemon (Docker-in-Docker) and then executes docker build using the Docker context specified as a parameter to the Kubernetes build.
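Inside the privileged build container running its own dockerd, the Docker-in-Docker step above boils down to invoking `docker build` against the supplied context. A minimal sketch, with the helper only constructing the command (the context path and tag are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildCmd constructs the `docker build` invocation the build job would
// run against its own Docker daemon. Separating construction from
// execution keeps the invocation easy to inspect.
func buildCmd(contextDir, tag string) *exec.Cmd {
	return exec.Command("docker", "build", "-t", tag, contextDir)
}

func main() {
	cmd := buildCmd("/build/context", "example.com/app:latest")
	fmt.Println(cmd.Args)
	// The build job would then run cmd.Run() and push the resulting
	// image to the target registry on success.
}
```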

Implementation Notes:

We had to prototype/provide a couple of new capabilities to implement this proof of concept:

  • Run-once containers
  • Launching privileged containers (for Docker-in-Docker)

Link to our prototype: https://github.com/ironcladlou/kubernetes/tree/build-poc
@ironcladlou, @pmorie

We'll have a screencast demonstrating our prototype shortly! We appreciate all feedback - thanks!

@smarterclayton (Contributor, Author) commented Jul 25, 2014

On a Venn diagram, Job and ReplicationController definitely overlap. To me, there was value in a Job object that could be driven by an external state machine, with the job status used as the state register (with the special states NOTSTARTED, RUNNING, and COMPLETE). I'd be interested in how others would model a consumable state machine on top of pod execution for reuse, or whether you would instead implement independent resources that depend only on pods. That pushed me toward separating Job from Build, but I could equally see it without that shared concept.
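The state-register idea can be sketched as a tiny transition function over the three special states named here; the transitions themselves are illustrative, assuming the only external event is the run-once pod terminating.

```go
package main

import "fmt"

// State is the job status used as the state register.
type State string

const (
	NotStarted State = "NOTSTARTED"
	Running    State = "RUNNING"
	Complete   State = "COMPLETE"
)

// advance computes the next state from the current state and whether
// the underlying run-once pod has terminated. An external driver (a
// build controller, a deployment controller, ...) would call this as
// events arrive.
func advance(s State, podTerminated bool) State {
	switch s {
	case NotStarted:
		return Running // the driver launches the pod here
	case Running:
		if podTerminated {
			return Complete
		}
	}
	return s
}

func main() {
	s := NotStarted
	s = advance(s, false) // pod launched
	s = advance(s, true)  // pod exited
	fmt.Println(s) // COMPLETE
}
```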

@bgrant0607 (Member) commented Jul 25, 2014

I definitely think this is a topic worth discussing, and we could host solutions on the kubernetes repo, either in the main tree or subrepos or something, even if the APIs don't necessarily land in the main apiserver.

We definitely need to do something (or multiple somethings) to make deployment simpler. Some deployment concepts and mechanisms have been discussed, such as declarative configuration (#113), pod templates and configuration generation (#170), and rolling updates (several issues).

So, we'd love to hear your ideas.

@smarterclayton (Contributor, Author) commented Jul 26, 2014

@ncdc let's move build to a WIP pull request so we can have a focused discussion on that.

For deployment, I'll create a separate issue on Monday for the various features an admin-focused or developer-focused deployment service might desire. I've created #635 to talk about API policy.

@shykes commented Jul 28, 2014

Whatever comes out of this, we'll consider it for merging into upstream Docker.

@ncdc (Member) commented Jul 28, 2014

@shykes which part(s) specifically?

@bgrant0607 (Member) commented Dec 3, 2014

We need a deployment solution and should document the recommended approach(es).

@bgrant0607 (Member) commented Feb 28, 2015

Not urgent, but how hard would it be to package most of this functionality as independent plugins (once we have a plugin mechanism)?

@smarterclayton (Contributor, Author) commented Feb 28, 2015

Very little. Builds are decoupled except for the ability to watch for upstream images that change, and the same applies to deployments.


@davidopp (Member) commented Apr 10, 2015

@goltermann goltermann added this to the v1.0-candidate milestone Jun 24, 2015
@bgrant0607 bgrant0607 removed this from the v1.0-candidate milestone Jun 25, 2015
@ghodss (Contributor) commented Sep 1, 2015

Is this now a dupe of #1743? Or do we still want to keep this open for the idea of builds? If so, it may help to close this issue and fork off a new one, since this one covers a lot as-is.

@ghodss (Contributor) commented Sep 1, 2015

The issue for a job controller is #1624, with the proposal for it at #11746.

@bgrant0607 (Member) commented Jan 7, 2016

I'm closing this now. Jobs and Deployments are underway. If builds appear in Kubernetes, it will be as some kind of extension. We might need image metadata at some point, but that's not really discussed here in any detail.

@bgrant0607 bgrant0607 closed this Jan 7, 2016