
Copy the first couple of documentation pages over to Hugo.

tstromberg committed Aug 1, 2019
1 parent 6dfd743 commit abe2411c1cd0547dfcb0d46b64124153c89d4c58
Showing with 1,006 additions and 1,804 deletions.
  1. +1 −1 site/content/en/docs/Concepts/
  2. +30 −0 site/content/en/docs/Concepts/
  3. +11 −0 site/content/en/docs/Contributing/
  4. +64 −0 site/content/en/docs/Contributing/
  5. +130 −0 site/content/en/docs/Contributing/
  6. +108 −0 site/content/en/docs/Contributing/
  7. +115 −0 site/content/en/docs/Contributing/
  8. +56 −0 site/content/en/docs/Contributing/
  9. +59 −0 site/content/en/docs/Contributing/
  10. +0 −81 site/content/en/docs/Contribution guidelines/
  11. +29 −5 site/content/en/docs/Examples/
  12. +75 −13 site/content/en/docs/Getting started/
  13. +0 −239 site/content/en/docs/Getting started/
  14. +29 −17 site/content/en/docs/Overview/
  15. +13 −0 site/content/en/docs/Reference/Commands/
  16. +8 −0 site/content/en/docs/Reference/Drivers/
  17. +10 −0 site/content/en/docs/Reference/Networking/
  18. +40 −0 site/content/en/docs/Reference/Networking/
  19. +1 −1 site/content/en/docs/Reference/
  20. +0 −212 site/content/en/docs/Reference/
  21. +0 −16 site/content/en/docs/Tasks/Ponycopters/
  22. +0 −239 site/content/en/docs/Tasks/Ponycopters/
  23. +0 −239 site/content/en/docs/Tasks/Ponycopters/
  24. +1 −3 site/content/en/docs/Tasks/
  25. +72 −0 site/content/en/docs/Tasks/
  26. +0 −239 site/content/en/docs/Tasks/
  27. +59 −0 site/content/en/docs/Tasks/
  28. +0 −239 site/content/en/docs/Tasks/
  29. +0 −239 site/content/en/docs/Tasks/
  30. +2 −7 site/content/en/docs/Tutorials/
  31. +1 −14 site/content/en/docs/
  32. +1 −0 site/layouts/partials/hooks/body-end.html
  33. +3 −0 site/layouts/partials/hooks/head-end.html
  34. +3 −0 site/layouts/shortcodes/tab.html
  35. +4 −0 site/layouts/shortcodes/tabs.html
  36. +54 −0 site/static/css/tabs.css
  37. +27 −0 site/static/js/tabs.js
@@ -3,7 +3,7 @@ title: "Concepts"
linkTitle: "Concepts"
weight: 4
description: >
-What does your user need to understand about your project in order to use it - or potentially contribute to it?
+Concepts that users and contributors should be aware of.

{{% pageinfo %}}
@@ -0,0 +1,30 @@
---
title: "Principles"
date: 2019-06-18T15:31:58+08:00
---

# Principles of Minikube

The primary goal of minikube is to make it simple to run Kubernetes locally, for day-to-day development workflows and learning purposes. Here are the guiding principles for minikube, in rough priority order:

1. User-friendly and accessible
2. Inclusive and community-driven
3. Cross-platform
4. Support all Kubernetes features
5. High-fidelity
6. Compatible with all supported Kubernetes releases
7. Support for all Kubernetes-friendly container runtimes
8. Stable and easy to debug

Here are some specific minikube features that align with our goal:

* Single command setup and teardown UX
* Support for local storage, networking, auto-scaling, load balancing, etc.
* Unified UX across operating systems
* Minimal dependencies on third party software
* Minimal resource overhead

## Non-Goals

* Simplifying Kubernetes production deployment experience
* Supporting all possible deployment configurations of Kubernetes like various types of storage, networking, etc.
@@ -0,0 +1,11 @@
---
title: "Contributing"
linkTitle: "Contributing"
weight: 10
description: >
  How to contribute to minikube
---

{{% pageinfo %}}
This page is under heavy construction
{{% /pageinfo %}}
@@ -0,0 +1,64 @@
---
title: "Addons"
date: 2019-07-31
weight: 4
description: >
  How to develop minikube addons
---

## Adding a New Addon

To add a new addon to minikube the following steps are required:

* For the new addon's .yaml file(s):
  * Put the required .yaml files for the addon in the `minikube/deploy/addons` directory.
  * Add the ` <NEW_ADDON_NAME>` label to each piece of the addon (ReplicationController, Service, etc.).
  * The `` annotation is also needed so that your resources are picked up by the `addon-manager` minikube addon.
  * In order for `minikube addons open <NEW_ADDON_NAME>` to work properly, the ` <NEW_ADDON_NAME>` label must be added to the appropriate endpoint service (the one the user would want to open/interact with). This service must be of type NodePort.

* To add the addon to the minikube commands/VM:
  * Add the addon, with the appropriate fields filled in, to the `Addon` dictionary; see the following example.

// cmd/minikube/cmd/config/config.go
var settings = []Setting{
	// ... other addon settings ...
	{
		name:        "efk",
		set:         SetBool,
		validations: []setFn{IsValidAddon},
		callbacks:   []setFn{EnableOrDisableAddon},
	},
}

* Add the addon to the settings list; see the following example.

// pkg/minikube/assets/addons.go
var Addons = map[string]*Addon{
	// ... other addon assets ...
	"efk": NewAddon([]*BinAsset{
		// BinAssets for the addon's .yaml files go here
	}, false, "efk"),
}

* Rebuild minikube using `make out/minikube`. This will put the addon's .yaml files into the minikube binary using go-bindata.
* Test the addon using the `minikube addons enable <NEW_ADDON_NAME>` command to start the service.
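Putting the .yaml requirements above together, an addon's endpoint Service might look roughly like the sketch below. This is hypothetical: the real label and annotation keys are elided above, so angle-bracket placeholders stand in for them; copy the actual keys from an existing addon under `deploy/addons`.

```yaml
# Hypothetical sketch only; placeholders are in angle brackets.
apiVersion: v1
kind: Service
metadata:
  name: <NEW_ADDON_NAME>
  namespace: kube-system
  labels:
    <ADDON_LABEL_KEY>: <NEW_ADDON_NAME>   # the addon label mentioned above
    <ADDON_MANAGER_KEY>: Reconcile        # annotation/label so addon-manager picks it up
spec:
  type: NodePort   # required for `minikube addons open <NEW_ADDON_NAME>`
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: <NEW_ADDON_NAME>
```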
@@ -0,0 +1,130 @@
---
title: "Building minikube"
date: 2019-07-31
weight: 4
description: >
  Building minikube
---

This guide covers both building the minikube binary and the ISO.

## Prerequisites

* A recent Go distribution (>=1.12)
* If you are on Windows, you'll need Docker to be installed.
* 4GB of RAM

Additionally, if you are on Fedora, you will need to install `glibc-static`:

sudo dnf install -y glibc-static

## Downloading the source

git clone
cd minikube

## Building the binary


Alternatively, you may cross-compile to/from different operating systems:


The resulting binaries for each platform will be located in the `out/` subdirectory.
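As a sketch, a typical build session looks like the following, assuming the standard Makefile targets (the target names here are assumptions; check the repository's Makefile for the authoritative list):

```shell
# Build for the host OS and architecture (assumed default target):
make

# Cross-compile for specific platforms (assumed per-platform targets):
make out/minikube-linux-amd64
make out/minikube-darwin-amd64
make out/minikube-windows-amd64.exe
```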

## Using a source-built minikube binary

Start the cluster using your built minikube with:

./out/minikube start

## Building the ISO

The minikube ISO is booted by each hypervisor to provide a stable minimal Linux environment to start Kubernetes from. It is built with Buildroot, uses systemd, and includes all necessary container runtimes and hypervisor guest drivers.

### Prerequisites

See the above requirements for building the minikube binary. Additionally, you will need:

sudo apt-get install build-essential gnupg2 p7zip-full git wget cpio python \
unzip bc gcc-multilib automake libtool locales

### Build instructions

$ make buildroot-image
$ make out/minikube.iso

The build will occur inside a Docker container. If you want to do this on
bare metal, replace `make out/minikube.iso` with `IN_DOCKER=1 make out/minikube.iso`.
The bootable ISO image will be available in `out/minikube.iso`.
The bootable ISO image will be available in `out/minikube.iso`.

### Using a local ISO image

$ ./out/minikube start --iso-url=file://$(pwd)/out/minikube.iso

### Modifying buildroot components

To change which Linux userland components are included by the guest VM, use this to modify the buildroot configuration:

cd out/buildroot
make menuconfig

To save these configuration changes, execute:

make savedefconfig

The changes will be reflected in the `minikube-iso/configs/minikube_defconfig` file.

### Adding kernel modules

To make kernel configuration changes and save them, execute:

$ make linux-menuconfig

This will open the kernel configuration menu and save your changes to the
iso directory after they've been selected.

### Adding third-party packages

To add your own package to the minikube ISO, create a package directory under `iso/minikube-iso/package`. This directory requires at least 3 files:

* `<package name>.mk` - a Makefile describing how to download the source code and build the program
* `<package name>.hash` - checksums to verify the downloaded source code
* `` - the buildroot configuration

For a relatively simple example to start with, you may want to reference the `podman` package.
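A minimal `<package name>.mk` could be sketched as below, assuming buildroot's `generic-package` infrastructure. The package name, URL, and commands are placeholders, not a real package; use an existing package such as `podman` as the real template.

```makefile
################################################################################
# mytool (hypothetical package, placeholder names and URL)
################################################################################

MYTOOL_VERSION = 1.0.0
MYTOOL_SOURCE = mytool-$(MYTOOL_VERSION).tar.gz
MYTOOL_SITE = https://example.com/releases
MYTOOL_LICENSE = Apache-2.0

define MYTOOL_BUILD_CMDS
	$(MAKE) CC="$(TARGET_CC)" LD="$(TARGET_LD)" -C $(@D)
endef

define MYTOOL_INSTALL_TARGET_CMDS
	$(INSTALL) -D -m 0755 $(@D)/mytool $(TARGET_DIR)/usr/bin/mytool
endef

$(eval $(generic-package))
```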

## Continuous Integration Builds

We publish CI builds of minikube, built at every Pull Request. Builds are available at (substitute in the relevant PR number):

- <>
- <>
- <>

We also publish CI builds of minikube-iso, built at every Pull Request that touches deploy/iso/minikube-iso. Builds are available at:

- <>
@@ -0,0 +1,108 @@
---
title: "Drivers"
date: 2019-07-31
weight: 4
description: >
  How to create a new VM Driver
---

This document is written for contributors who are familiar with minikube and would like to add support for a new VM driver.

minikube relies on docker-machine drivers to manage machines. This document discusses how to modify minikube so that the new driver may be used with `minikube start --vm-driver=<new_driver>`.

## Creating a new driver

See the fork where all new docker-machine drivers are located.

## Builtin vs External Drivers

Most drivers are built-in: they are included in minikube as a code dependency, so no further
installation is required. There are two primary cases where you may want to use an external driver:

- The driver has a code dependency which minikube should not rely on due to platform incompatibilities (kvm2) or licensing
- The driver needs to run with elevated permissions (hyperkit)

External drivers are instantiated by executing a command named `docker-machine-driver-<name>`, which starts an RPC server that minikube talks to.

### Integrating a driver

The integration process consists of the following steps:

1. Create a driver shim within ``
   - Add a Go build tag for the supported operating systems
   - Define the driver metadata to register in `DriverDef`
2. Add an import in `pkg/minikube/cluster/default_drivers.go` so that the driver is included by the minikube build process.

### The driver shim

The primary duty of the driver shim is to register a VM driver with minikube, and translate minikube VM hardware configuration into a format that the driver understands.

### Registering your driver

The godoc of registry is available here: <>

`DriverDef` is the main struct used to define driver metadata. Essentially, you need to define
at most 4 things, which is pretty simple once you understand your driver well:

- Name: the unique name of the driver; it will be used as the unique ID in the registry and as
the `--vm-driver` option in minikube commands

- Builtin: `true` if the driver should be builtin to minikube (preferred), `false` otherwise.

- ConfigCreator: how to translate a minikube config to the driver config. The driver config will be persisted
in your `$USER/.minikube` directory. Most likely the driver config is the driver itself.

- DriverCreator: only needed when the driver is builtin, to instantiate the driver instance.

## Integration example: vmwarefusion

All drivers are located in ``. Take `vmwarefusion` as an example:

// +build darwin

package vmwarefusion

import (
	"github.com/docker/machine/drivers/vmwarefusion"
	"github.com/docker/machine/libmachine/drivers"

	cfg ""
	// (remaining minikube imports elided in the original: constants, registry, ...)
)

func init() {
	registry.Register(registry.DriverDef{
		Name:          "vmwarefusion",
		Builtin:       true,
		ConfigCreator: createVMwareFusionHost,
		DriverCreator: func() drivers.Driver {
			return vmwarefusion.NewDriver("", "")
		},
	})
}

func createVMwareFusionHost(config cfg.MachineConfig) interface{} {
	d := vmwarefusion.NewDriver(cfg.GetMachineName(), constants.GetMinipath()).(*vmwarefusion.Driver)
	d.Boot2DockerURL = config.Downloader.GetISOFileURI(config.MinikubeISO)
	d.Memory = config.Memory
	d.CPU = config.CPUs
	d.DiskSize = config.DiskSize
	d.SSHPort = 22
	d.ISO = d.ResolveStorePath("boot2docker.iso")
	return d
}

- In the `init` function, register a `DriverDef` in the registry, specifying the metadata in the `DriverDef`. As mentioned
earlier, it's builtin, so you also need to specify `DriverCreator` to tell minikube how to create a `drivers.Driver`.
- Another important point: `vmwarefusion` only runs on macOS. You need to add a build tag at the top so it only
builds on macOS, so that the releases on Windows and Linux won't have this driver in the registry.
- Last but not least, import the driver in `pkg/minikube/cluster/default_drivers.go` to include it in the build.

Any questions? Please ping your friend @anfernee or ask in the #minikube Slack channel.
