
Proposal for improving local cluster experience. #24106

Merged 1 commit, Apr 20, 2016
docs/proposals/local-cluster-ux.md (190 additions, 0 deletions)
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

<strong>
If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes Local Cluster Experience

This proposal attempts to improve the existing local cluster experience for Kubernetes.
The current local cluster experience is sub-par and often not functional.
**Member:**

Yes, it has some shortcomings, but I wouldn't say often not functional

**Contributor Author:**

Some of the deployment solutions just don't work. It should work all the time.
I assume you were only thinking of the docker based solution?


**Member:**

Yeah, I thought about the docker based solution.
Didn't think you meant others as well (haven't tested those, but I have a feeling that they're not that up-to-date).
Could you anyway update the description to reflect that? (Can you make it more clear?)

**Contributor Author:**

I added some points. I don't want to get into specific issues on each option here.

There are several options to set up a local cluster (docker, vagrant, linux processes, etc.) and we do not test any of them continuously.
Here are some highlighted issues:
- Docker based solution breaks with docker upgrades, does not support DNS, and many kubelet features are not functional yet inside a container.
**Member:**

Maybe The docker based solution may break with docker upgrades?

And it does support DNS, although it isn't automatically deployed

- Vagrant based solutions are too heavy and have mostly failed on OS X.
- Local linux clusters are poorly documented and undiscoverable.

From an end user's perspective, they want to run a Kubernetes cluster. They care less about *how* a cluster is set up locally and more about what they can do with a functional cluster.


## Primary Goals

From a high level, the goal is to make it easy for a new user to run a Kubernetes cluster and play with curated examples that require the least amount of knowledge about Kubernetes.
These examples will use only kubectl, and only a subset of the available Kubernetes features will be exposed.
**Member:**

Which features won't be exposed?

**Contributor Author:**

A few that I can think of off the top of my head are:

- L3 & L7 loadbalancing
- Most of the volume plugins
- Horizontal Pod autoscaling - this needs the heapster addon

I'm sure we will find more along the way.



- Works across multiple OSes - OS X, Linux and Windows primarily.
**Member (@luxas, Apr 17, 2016):**

The server platform may be one of amd64, arm, arm64 or ppc64le; clients are available for windows and darwin.

Please point that out.

**Member:**

And maybe point out that on osx and windows, it will use a linux vm as the docker host

**Contributor Author:**

Linux VM is an implementation detail. If we can get kubelet to run natively on BSD, why do we need a VM?

Multi-arch support needs more thought. The purpose of this proposal is to improve the kick the tires experience. How many people want to develop on arm? Given that we will have a separate means to deploy multi-node kubernetes clusters, arm support can be dealt with there, at least to begin with.

**Member:**

localkube should definitely be built for the other arches, and so should minikube too
minikube should only support the docker (or host) solution for those arches.

- Single command setup and teardown UX.
- Unified UX across OSes
- Minimal dependencies on third party software.
- Minimal resource overhead.
- Eliminate any other alternatives to local cluster deployment.

## Secondary Goals

- Enable developers to use the local cluster for kubernetes development.

## Non Goals

- Simplifying kubernetes production deployment experience. [Kube-deploy](https://github.com/kubernetes/kube-deploy) is attempting to tackle this problem.
- Supporting all possible deployment configurations of Kubernetes like various types of storage, networking, etc.


## Local cluster requirements

- Includes all the master components & DNS (Apiserver, scheduler, controller manager, etcd and kube dns)
**Member:**

Should dashboard be included? How should we start the addons?
I've wondered if something like @mikedanese's addon-manager in a pod would fit this setup.

**Contributor Author:**

Dashboard maybe. We really want the local cluster to be a bare bones kubernetes cluster that will let users play with and develop kubernetes applications.

**Member:**

Dashboard would be essential and nice to have. Please add it as a goal

**Contributor:**

I think all add ons should be there. Dashboard, DNS, Helm, Monitoring etc...

**Contributor Author:**

Local clusters are resource constrained. Let's not add anything that is not absolutely necessary for developing apps against Kubernetes.

**Member:**

Dashboard and DNS are working cross-platform (amd64, arm, arm64, ppc64le) after my PRs, so I think we should focus on them. I see them as essential.

**Member:**

Maybe we could sync up with @ArtfulCoder, who is working on a all-in-one DNS package

**Contributor Author:**

localkube already includes DNS. We have to ensure that it is configured appropriately.

- Basic auth
- Service accounts should be setup
**Member:**

Maybe ServiceAccounts should be created automatically or ServiceAccounts should be working?

**Contributor:**

Without service accounts, DNS won't work, nor will Helm. So yes definitely +1, service accounts should definitely work.

**Contributor Author:**

Creation + configuration

- Kubectl should be auto-configured to use the local cluster
- Tested & maintained as part of Kubernetes core

## Existing solutions

Following are some of the existing solutions that attempt to simplify local cluster deployments.

### [Spread](https://github.com/redspread/spread)

Spread's UX is great!
It is adapted from monokube and includes DNS as well.
It satisfies almost all the requirements, except that it requires docker to be pre-installed.
**Member:**

And why not improve it to not depend on docker if it's that important to you?

**Contributor Author:**

Improving the existing spread model is what this proposal is suggesting. Is your question more about why the improvements are happening in kube repo instead of in spread's repo?

**Member:**
Yes. But maybe it's worth to make it a 1st class citizen in kubernetes repo...

**Contributor Author:**

Makes sense. The software that is built around this proposal will be a first class citizen in kubernetes and will be maintained by the Kubernetes team. Since localkube is part of the proposed solution, I think it will become a first class citizen as well.

**Comment:**

Having a good, networked local environment to work in is really important to us - it was one of our biggest pain points developing with Kubernetes. Happy to help collaborate on this!

**Member:**

Great! :-D

It has a loose dependency on docker.
New releases of docker might break this setup.

**Comment:**

Would this solution with a locked down version of docker be enough to satisfy the reqs?

**Contributor:**

I think it would be pretty close.

We'll probably want to wrap up the VM installation/configuration as well though, and localkube /spread are basically bring your own docker (through machine or any other way to run docker).

**Comment:**

I can see advantages to locking down the docker version vs. BYOD (bring your own docker).

It's a tradeoff between ease of use and depth of configuration. Ease of use gets people who are new to the tech stack, but depth of configuration gets you more flexibility to match your production environment.

Ideally, the docker version would be the same in both production and dev. But I think it's reasonable to assume things are going to be out of sync between production and local environments. Potentially more often with a packaged version of docker.

It seems useful to give the developer at least some control over the docker version.

**Contributor Author:**

@hharnisc

> It seems useful to give the developer at least some control over the docker version.

This is a fair want. As a developer myself, being able to test against new versions of docker is helpful.

On the other hand, requiring the kubelet to run natively inside a docker container and support all its features is proving to be hard. It is the path that we have taken now and it is not easy. On top of that, docker now can run natively on OS X and Windows and this makes the UX much harder.

**Comment:**

We considered including VM installation, but we figured there were already good options for setting up a VM (i.e. docker-machine) or running Docker that people were used to. We also wanted to keep localkube pretty unix-like and not add too much additional functionality, and "bring your own Docker" makes it more flexible.


### [Kmachine](https://github.com/skippbox/kmachine)

Kmachine is adapted from docker-machine.
It exposes the entire docker-machine CLI.
It is possible to repurpose Kmachine to meet all our requirements.

**Contributor:**

Thanks for mentioning kmachine.
Yes, it is a fork of docker-machine; as such it uses libmachine.
Currently it supports k8s 1.2.0 and you can also specify a new release version at runtime.
We extensively tested service accounts as well as the add-ons.

One of the benefits is the availability of the cloud provider drivers, so that you can start your nanokube remotely.

And it is a single go binary.

Happy to discuss the 'repurpose' you mention.

**Contributor Author:**

@Runseb Just to limit scope, we want to focus only on the local cluster experience. Is running a single node cluster on a cloud provider a common use case for users?

**Contributor:**

I very often have multiple 'docker-machine' running in different clouds. One bonus of doing it is that the bandwidth to the cloud provider is usually much higher than on your local setup, so images download much faster.

So speaking for myself, I tend to develop on a droplet in digitalocean rather than on a local virtual box.

So say, you run localkube in a VM on vbox (if I understand this proposal correctly), then I bet you that someone is going to ask to run this on cloud providers, and you will go down the route of docker-machine like drivers.

Also when doing something purely local you tend to make assumptions that are not valid remotely, and when you move to a cloud your stuff breaks.

That's why we used docker-machine to build kmachine.

Internally we just modified docker-machine to deploy k8s as a single node docker deployment. We jumped through some hoops to deal with the kubelet running in a container and ended up running it as a regular service. But basically what you get is:

- single binary
- single node k8s
- on the cloud that you want
- and a docker host (which people still need to build their images for their apps)

I will also note that kubectl is made to handle multiple remote endpoints and that in the production case your cluster will be remote, so I believe it is better to build a single node experience that still mimics remote k8s access.

**Member:**

> So say, you run localkube in a VM on vbox (if I understand this proposal correctly), then I bet you that someone is going to ask to run this on cloud providers, and you will go down the route of docker-machine like drivers.

I'm not sure this will happen, as this is a "toy" example and there are ways to run on cloud providers that are quite easy. You can "play" locally and when you want to try more things, you deploy to a cloud provider and test what you want (load balancers, PV, etc.).

It's not clear to me that this is/will be needed.

> Also when doing something purely local you tend to make assumptions that are not valid remotely, and when you move to a cloud your stuff breaks.

Like which ones? It didn't happen to me. YAML files that are valid in a local kubernetes are valid on a remote one. Just that locally you have fewer features (like PVs or LB), but that is just the reason I don't expect this to happen.

> I will also note that kubectl is made to handle multiple remote endpoints and that in the production case your cluster will be remote, so I believe it is better to build a single node experience that still mimics remote k8s access.

The toolbox proposes to use kubectl, I understand.

**Comment:**

We chose to scope localkube to a minimal setup because there are other options to set up a Docker environment, and we wanted to maintain an abstraction that allows us to work within any Docker environment. That way we wouldn't limit users when they wanted to experiment with Docker versions, Kubernetes versions, different hypervisors, etc - we could provide a good UX layer on top to make things easier without losing flexibility.

An approach that would address this is Git's analogy of porcelain and plumbing. Seems like for local dev for k8s, all the tooling is there, just needs to be put together in a Unix-like way that relies on existing underlying infrastructure.

**Contributor Author:**

@mfburnett

Relying on Docker helps solve the cross-OS and distro deployment issues. That requires full compatibility with many docker versions though, and we have had lots of difficulty with compatibility in the past.
Getting most of the basic functionality working in a docker container is also proving to be challenging.

As a secondary goal, I want to enable kubernetes developers to use this setup for testing Kubernetes itself. So going with a VM will let us run other runtimes like rkt or hyper in the future, let kube developers program using OS X and Windows. Imagine building localkube only for kube changes and being able to test it everywhere.

**Contributor:**

@rata just saw your comments, sorry for the delay.

Even though it is a toy model, I very often spawn docker-machines on cloud providers. For the simple fact that it is faster (to start and download images).

As for validity of things locally. I was not referring to the yaml specs and such, but to networking environment, security settings etc.

**Member:**

> Even though it is a toy model, I very often spawn docker-machines on cloud providers. For the simple fact that it is faster (to start and download images).

Well, it might be faster or not. But there are very simple ways to run on a cloud provider. I don't see why running on a cloud provider should be a goal of "minikube".

> As for validity of things locally. I was not referring to the yaml specs and such, but to networking environment, security settings etc.

I don't see what in a one node cluster can break ("Also when doing something purely local you tend to make assumptions that are not valid remotely, and when you move to a cloud your stuff breaks.") when moving to the cloud.

Containers inside a pod communicate via localhost just fine (locally and in the cloud), service names will work locally just fine (so will in the cloud), networking when using a cloud provider "just works", etc.

**Contributor Author:**

Except for how services are exposed and the availability of certain volume types, we should limit the differences between a local cluster and a remote cluster from a Kubernetes API perspective.

### [Monokube](https://github.com/polvi/monokube)

Single binary that runs all kube master components.
Does not include DNS.
This is only a part of the overall local cluster solution.

### Vagrant

The kube-up.sh script included in the Kubernetes release supports a few Vagrant based local cluster deployments.
kube-up.sh is not user friendly.
It typically takes a long time for the cluster to be set up using vagrant and oftentimes is unsuccessful on OS X.
The [Core OS single machine guide](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html) uses Vagrant as well and it just works.
Since we are targeting a single command install/teardown experience, vagrant needs to be an implementation detail and not be exposed to our users.

## Proposed Solution

To avoid exposing users to third party software and external dependencies, we will build a toolbox that will be shipped with all the dependencies including all kubernetes components, hypervisor, base image, kubectl, etc.
*Note: Docker provides a [similar toolbox](https://www.docker.com/products/docker-toolbox).*
**Contributor:**

I don't want to sound self-serving here but definitely a kubernetes toolbox is needed.

While there are ways to run k8s with a single binary now, as well as run Docker on OSX natively, I think that a toolbox can provide a single user experience by packaging whatever is needed.

So +1

At skippbox we started with this idea and got inspired by the docker toolbox, so we created

We had an early prototype of packaging everything in a toolbox including kubectl.

**Contributor Author:**

That's awesome. cc @dlorenc

This "Localkube" tool will be referred to as "Minikube" in this proposal to avoid ambiguity against Spread's existing ["localkube"](https://github.com/redspread/localkube).
**Member:**

What should we call the version of localkube that is going to be contributed to core kubernetes?
@mfburnett @vishh

The final name of this tool is TBD. Suggestions are welcome!

Minikube will provide a unified CLI to interact with the local cluster.
The CLI will support only a few operations:
- **Start** - creates & starts a local cluster along with setting up kubectl & networking (if necessary)
- **Stop** - suspends the local cluster & preserves cluster state
- **Delete** - deletes the local cluster completely
- **Upgrade** - upgrades internal components to the latest available version (upgrades are not guaranteed to preserve cluster state)
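As a sketch of the proposed single-command UX, the four verbs above could be wired up with a trivial dispatcher. This is a hypothetical skeleton only; in the real tool each action would drive libmachine and localkube rather than print a message.

```go
package main

import (
	"fmt"
	"os"
)

// dispatch maps the four proposed verbs to their actions. The bodies
// are placeholders standing in for VM and localkube management.
func dispatch(verb string) (string, error) {
	switch verb {
	case "start":
		return "creating and starting the local cluster", nil
	case "stop":
		return "suspending the local cluster, preserving state", nil
	case "delete":
		return "deleting the local cluster completely", nil
	case "upgrade":
		return "upgrading internal components to the latest version", nil
	default:
		return "", fmt.Errorf("unknown verb %q", verb)
	}
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: minikube {start|stop|delete|upgrade}")
		os.Exit(1)
	}
	msg, err := dispatch(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(msg)
}
```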

For running and managing the kubernetes components themselves, we can re-use [Spread's localkube](https://github.com/redspread/localkube).
**Member:**

localkube will donate most of its code to core Kubernetes and we'll use that

Localkube is a self-contained go binary that includes all the master components including DNS and runs them using multiple go threads.
**Member:**

Why not hyperkube that is already on kubernetes repo?

**Contributor Author:**

- DNS works out of the box with localkube
- We can run localkube on the VM outside of a container, which obviates the need for getting kubelet to run inside a container.

**Member:**

Fair enough :)

**Member:**

hyperkube only runs one component at a time.

**Member:**

Even a better reason :)

Each Kubernetes release will include a localkube binary that has been tested exhaustively.
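The single-binary model described above, in which all master components run inside one process, can be sketched with goroutines. The component names come from this proposal; the run bodies are placeholders for the real component loops.

```go
package main

import (
	"fmt"
	"sync"
)

// component stands in for one master component (apiserver, scheduler,
// controller manager, etcd, kube-dns). In the localkube model each one
// runs as a goroutine inside a single binary instead of as a separate
// process.
type component struct {
	name string
	run  func() error
}

// startAll launches every component concurrently and waits for them,
// collecting one error slot per component.
func startAll(components []component) []error {
	var wg sync.WaitGroup
	errs := make([]error, len(components))
	for i, c := range components {
		wg.Add(1)
		go func(i int, c component) {
			defer wg.Done()
			errs[i] = c.run()
		}(i, c)
	}
	wg.Wait()
	return errs
}

func main() {
	names := []string{"etcd", "apiserver", "scheduler", "controller-manager", "kube-dns"}
	var cs []component
	for _, n := range names {
		n := n // capture loop variable
		cs = append(cs, component{name: n, run: func() error {
			fmt.Println(n, "started")
			return nil // a real component would block serving requests
		}})
	}
	startAll(cs)
}
```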

To support Windows and OS X, minikube will use [libmachine](https://github.com/docker/machine/tree/master/libmachine) internally to create and destroy virtual machines.
Minikube will be shipped with a hypervisor (virtualbox) in the case of OS X.
Minikube will include a base image that will be well tested.

In the case of Linux, since the cluster can be run locally, we ideally want to avoid setting up a VM.
Since docker is the only fully supported runtime as of Kubernetes v1.2, we can initially use docker to run and manage localkube.
There is risk of being incompatible with the existing version of docker.
By using a VM, we can avoid such incompatibility issues though.
Feedback from the community will be helpful here.

If the goal is to run outside of a VM, we can have minikube prompt the user if docker is unavailable or its version is incompatible.
Alternatives to docker for running the localkube core include using [rkt](https://coreos.com/rkt/docs/latest/), setting up systemd services, or a System V init script, depending on the distro.
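The "prompt if docker is unavailable or incompatible" check could look like the sketch below. The minimum version used here is an invented placeholder, not an actual minikube requirement, and the `docker version` invocation is only described in a comment rather than executed.

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// compatible reports whether a dotted docker version string (e.g.
// "1.10.3") meets an assumed minimum. The threshold passed by callers
// is illustrative, not a real minikube requirement.
func compatible(version, min string) bool {
	parse := func(s string) []int {
		var out []int
		for _, p := range strings.Split(s, ".") {
			n, _ := strconv.Atoi(p)
			out = append(out, n)
		}
		return out
	}
	v, m := parse(version), parse(min)
	for i := 0; i < len(m); i++ {
		var vi int
		if i < len(v) {
			vi = v[i]
		}
		if vi != m[i] {
			return vi > m[i]
		}
	}
	return true
}

func main() {
	// Step 1: is docker on PATH at all?
	if _, err := exec.LookPath("docker"); err != nil {
		fmt.Println("docker not found; install docker or use the VM-based path")
		return
	}
	// Step 2: a real check would run `docker version` and feed the
	// reported version into compatible() before proceeding.
	fmt.Println("docker found; 1.10.3 ok against minimum 1.9:", compatible("1.10.3", "1.9"))
}
```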

To summarize, the pipeline is as follows:

##### OS X / Windows

minikube -> libmachine -> virtualbox/hyper-v -> linux VM -> localkube
**Member:**

nit: virtualbox/hyper-v


##### Linux

minikube -> docker -> localkube

### Alternatives considered

#### Bring your own docker

##### Pros

- Kubernetes users will probably already have it
- No extra work for us
- Only one VM/daemon, we can just reuse the existing one

##### Cons

- Not designed to be wrapped, may be unstable
- Might make configuring networking difficult on OS X and Windows
**Member:**

Presumably there are storage/volume options (e.g., NFS) that might not work when using this approach, assuming it is not run as root.

**Contributor Author:**

Good point. Yes. Not all volumes will be supported. There is also docker running natively on OS X and Windows, which will introduce more complexity.

- Versioning and updates will be challenging. We can mitigate some of this with testing at HEAD, but we'll inevitably hit situations where it's infeasible to work with multiple versions of docker.
- There are lots of different ways to install docker, networking might be challenging if we try to support many paths.

#### Vagrant

##### Pros

- We control the entire experience
- Networking might be easier to build
- Docker can't break us since we'll include a pinned version of Docker
- Easier to support rkt or hyper in the future
- Would let us run some things outside of containers (kubelet, maybe ingress/load balancers)

##### Cons

- More work
- Extra resources (if the user is also running docker-machine)
- Confusing if there are two docker daemons (images built in one can't be run in another)
- Always needs a VM, even on Linux
- Requires installing and possibly understanding Vagrant.

**Member:**

Only for amd64 I assume

**Contributor Author:**

Really? Doesn't virtualbox run on arm platforms with arm binaries?

**Member:**

Nope, virtualbox doesn't run on ARM (https://forums.virtualbox.org/viewtopic.php?f=9&t=65426)
If it would, nobody would use it anyway, 'cause it would be so slow.

## Releases & Distribution

- Minikube will be released independent of Kubernetes core in order to facilitate fixing of issues that are outside of Kubernetes core.
- The latest version of Minikube is guaranteed to support the latest release of Kubernetes, including documentation.
- The Google Cloud SDK will package minikube and provide utilities for configuring kubectl to use it, but will not in any other way wrap minikube.



<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/local-cluster-ux.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->