Commit bd7de62
Update README.md and add quickstart documentation
prksu committed Jan 27, 2020
1 parent 76d1232 commit bd7de62
Showing 2 changed files with 174 additions and 208 deletions.
256 changes: 48 additions & 208 deletions README.md
@@ -1,221 +1,61 @@
# Kubernetes Cluster API Provider DigitalOcean

This repository hosts a concrete implementation of a provider for [DigitalOcean](https://www.digitalocean.com/) for the [cluster-api project](https://github.com/kubernetes-sigs/cluster-api).
<p align="center"><img alt="capi" src="https://github.com/kubernetes-sigs/cluster-api/raw/master/docs/book/src/images/introduction.png" width="160x" /><img alt="capi" src="https://upload.wikimedia.org/wikipedia/commons/f/ff/DigitalOcean_logo.svg" width="192x" /></p>
<p align="center">
<!-- prow build badge, godoc, and go report card-->
<a href="https://godoc.org/sigs.k8s.io/cluster-api-provider-digitalocean"><img src="https://godoc.org/sigs.k8s.io/cluster-api-provider-digitalocean?status.svg"></a> <a href="https://goreportcard.com/report/sigs.k8s.io/cluster-api-provider-digitalocean"><img alt="Go Report Card" src="https://goreportcard.com/badge/sigs.k8s.io/cluster-api-provider-digitalocean" /></a></p>

------

Kubernetes-native declarative infrastructure for DigitalOcean.

## What is the Cluster API Provider DigitalOcean

The [Cluster API][cluster_api] brings
declarative, Kubernetes-style APIs to cluster creation, configuration and
management.

The API itself is shared across multiple cloud providers, allowing for true DigitalOcean hybrid deployments of Kubernetes. It is built atop the lessons learned from previous cluster managers such as [kops][kops] and [kubicorn][kubicorn].

## Project Status

This project is currently work-in-progress and in Alpha, so it may not be production ready. There is no backwards-compatibility guarantee at this point. For more details on the roadmap and upcoming features, check out [the project's issue tracker on GitHub][issue].

## Launching a Kubernetes cluster on DigitalOcean

Check out the [getting started guide](./docs/getting-started.md) for launching a cluster on DigitalOcean.

## Features

- Native Kubernetes manifests and API
- Support for single- and multi-node control plane clusters
- Choice of Linux distribution (as long as a current cloud-init is available)

------

## Compatibility with Cluster API and Kubernetes Versions

## Getting Started

### Prerequisites

In order to create a cluster using `clusterctl`, you need the following tools installed on your local machine:

* `kubectl`, which can be installed by following [this tutorial](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [`kustomize`](https://github.com/kubernetes-sigs/kustomize), used to generate the manifests needed to deploy a cluster,
* [`minikube`](https://kubernetes.io/docs/tasks/tools/install-minikube/) and the appropriate [`minikube` driver](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md). We recommend the `kvm2` driver for Linux and `virtualbox` for macOS.
* A [DigitalOcean API Access Token](https://www.digitalocean.com/docs/api/create-personal-access-token/), set as the `DIGITALOCEAN_ACCESS_TOKEN` environment variable,
* The Go toolchain, [installed and configured](https://golang.org/doc/install), needed to compile the `clusterctl` binary,
* The `cluster-api-provider-digitalocean` repository cloned:
```bash
git clone https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean $(go env GOPATH)/src/sigs.k8s.io/cluster-api-provider-digitalocean
```

### Building `clusterctl`

The `clusterctl` tool is used to bootstrap a Kubernetes cluster from zero. Currently, we have not released binaries, so you need to compile it manually.

Compiling is done by invoking the `compile` Make target:
```bash
make compile
```

This command generates three binaries: `clusterctl`, `machine-controller` and `cluster-controller`, in the `./bin` directory. In order to bootstrap the cluster, you only need the `clusterctl` binary.

The `clusterctl` can also be compiled manually, such as:
```bash
cd $(go env GOPATH)/src/sigs.k8s.io/cluster-api-provider-digitalocean/cmd/clusterctl
go install
```

## Creating a Cluster

To create your first cluster using `cluster-api-provider-digitalocean`, you need to use `clusterctl`. It takes the following four manifests as input:

* `cluster.yaml` - defines Cluster properties, such as Pod and Services CIDR, Services Domain, etc.
* `machines.yaml` - defines Machine properties, such as machine size, image, tags, SSH keys, enabled features, as well as what Kubernetes version will be used for each machine.
* `provider-components.yaml` - contains deployment manifest for Cluster-API Controller and DigitalOcean Manager binary which manages and reconciles Cluster-API resources related to this provider.
* [Optional] `addons.yaml` - used to deploy additional components once the cluster is bootstrapped, such as [DigitalOcean Cloud Controller Manager](https://github.com/digitalocean/digitalocean-cloud-controller-manager) and [DigitalOcean CSI plugin](https://github.com/digitalocean/csi-digitalocean).

The manifests can be generated automatically by using the [`generate-yaml.sh`](./cmd/clusterctl/examples/digitalocean/generate-yaml.sh) script, located in the `cmd/clusterctl/examples/digitalocean` directory:
```bash
cd cmd/clusterctl/examples/digitalocean
./generate-yaml.sh
```

The result of the script is an `out` directory with generated manifests and a generated SSH key to be used by the `machine-controller`. More details about how it generates manifests and how to customize them can be found in the [README file in `cmd/clusterctl/examples/digitalocean` directory](./cmd/clusterctl/examples/digitalocean).

The `generate-yaml.sh` script takes care of `cluster.yaml`, `machines.yaml` and `addons.yaml` manifests, while the `provider-components.yaml` manifest must be generated using [Kustomize](https://github.com/kubernetes-sigs/kustomize), such as:
```bash
# Return to the project's root directory
cd ../../../..
# Build provider-components manifest for deploying the Manager for the DigitalOcean Provider
kustomize build config/default/ > cmd/clusterctl/examples/digitalocean/out/provider-components.yaml
# Append manifest for deploying Cluster-API Controller to the generated provider-components manifest
echo "---" >> cmd/clusterctl/examples/digitalocean/out/provider-components.yaml
kustomize build vendor/sigs.k8s.io/cluster-api/config/default/ >> cmd/clusterctl/examples/digitalocean/out/provider-components.yaml
```
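The `---` line above is simply the YAML document separator; the two `kustomize build` outputs become separate documents in a single file. A minimal offline sketch of the same concatenation, with stand-in file names and contents:

```bash
# Stand-in manifests (in the real flow these come from `kustomize build`).
printf 'kind: Deployment\nmetadata:\n  name: digitalocean-manager\n' > provider.yaml
printf 'kind: Deployment\nmetadata:\n  name: cluster-api-controller\n' > capi.yaml

# Join the two documents with the `---` separator, as done above.
cat provider.yaml > provider-components.yaml
echo "---" >> provider-components.yaml
cat capi.yaml >> provider-components.yaml

# The combined file now holds two YAML documents.
grep -c '^kind:' provider-components.yaml   # prints 2
```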

Once you have manifests generated, you can create a cluster using the following command. Make sure to replace the value of `vm-driver` flag with the name of your actual `minikube` driver.
```bash
./bin/clusterctl create cluster \
--provider digitalocean \
--vm-driver kvm2 \
-c ./cmd/clusterctl/examples/digitalocean/out/cluster.yaml \
-m ./cmd/clusterctl/examples/digitalocean/out/machines.yaml \
-p ./cmd/clusterctl/examples/digitalocean/out/provider-components.yaml \
-a ./cmd/clusterctl/examples/digitalocean/out/addons.yaml
```

More details about the `create cluster` command can be found by invoking help:
```bash
./bin/clusterctl create cluster --help
```

The `clusterctl`'s workflow is:
* Create a Minikube bootstrap cluster,
* Deploy the `cluster-api-controller` and `digitalocean-manager`,
* Create a Master, download the `kubeconfig` file, and deploy controllers on the Master,
* Create other specified machines (nodes),
* Deploy addon components ([`digitalocean-cloud-controller-manager`](https://github.com/digitalocean/digitalocean-cloud-controller-manager) and [`csi-digitalocean`](https://github.com/digitalocean/csi-digitalocean)),
* Remove the local Minikube cluster.

To learn more about the process and how each component works, check out the [diagram in the `cluster-api` repository](https://github.com/kubernetes-sigs/cluster-api#what-is-the-cluster-api).

### Interacting With Your New Cluster

`clusterctl` automatically downloads the cluster's `kubeconfig` file to your current directory. You can use it with `kubectl` to interact with your cluster:
```bash
kubectl --kubeconfig kubeconfig get nodes
kubectl --kubeconfig kubeconfig get all --all-namespaces
```
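If you prefer not to repeat the `--kubeconfig` flag on every invocation, you can point `kubectl` at the file through the `KUBECONFIG` environment variable for the current shell session:

```bash
# Use the downloaded kubeconfig for all kubectl calls in this shell.
export KUBECONFIG="$PWD/kubeconfig"

# Subsequent invocations no longer need the --kubeconfig flag, e.g.:
#   kubectl get nodes
echo "$KUBECONFIG"
```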

## Upgrading the Cluster

Upgrading the Master automatically (by updating the Machine object) is currently not possible, as the Update method is not fully implemented. More details can be found in [issue #32](https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean/issues/32).

Workers can be upgraded by updating the appropriate Machine object for that node. Workers are upgraded by replacing nodes: first the old node is removed, and then a new one with the new properties is created.

To ensure non-disruptive maintenance, we recommend having at least two worker nodes at the time of upgrading, so another node can take over tasks from the node being upgraded. The node to be upgraded should be marked unschedulable and drained, so no pods are running or scheduled on it.

```bash
# Make node unschedulable.
kubectl --kubeconfig kubeconfig cordon <node-name>
# Drain all pods from the node.
kubectl --kubeconfig kubeconfig drain <node-name>
```

Now that the node is prepared for upgrading, you can proceed with editing the Machine object:
```bash
kubectl --kubeconfig kubeconfig edit machine <node-name>
```

This opens the Machine manifest, such as the following one, in your default text editor. You can choose the editor by setting the `EDITOR` environment variable.

There you can change machine properties, including the Kubernetes (`kubelet`) version.

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  creationTimestamp: 2018-09-14T11:02:16Z
  finalizers:
  - machine.cluster.k8s.io
  generateName: digitalocean-fra1-node-
  generation: 3
  labels:
    set: node
  name: digitalocean-fra1-node-tzzgm
  namespace: default
  resourceVersion: "5"
  selfLink: /apis/cluster.k8s.io/v1alpha1/namespaces/default/machines/digitalocean-fra1-node-tzzgm
  uid: a41f83ad-b80d-11e8-aeef-0242ac110003
spec:
  metadata:
    creationTimestamp: null
  providerSpec:
    valueFrom: null
    value:
      backups: false
      image: ubuntu-18-04-x64
      ipv6: false
      monitoring: true
      private_networking: true
      region: fra1
      size: s-2vcpu-2gb
      sshPublicKeys:
      - ssh-rsa AAAA
      tags:
      - machine-2
      versions:
        kubelet: 1.11.3
status:
  lastUpdated: null
  providerStatus: null
```

Saving changes to the Machine object deletes the old machine and then creates a new one. After some time, the new machine will be part of your Kubernetes cluster. You can track progress by watching the list of nodes. Once the new node appears and is Ready, the upgrade has finished.

```bash
watch -n1 kubectl --kubeconfig kubeconfig get nodes
```
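The interactive edit above boils down to changing `spec.versions.kubelet` on the Machine object. As an offline illustration of that change, here is the same update applied with `jq` to a stripped-down Machine (the target version `1.11.5` is just a stand-in):

```bash
# A stripped-down Machine object, as JSON.
cat > machine.json <<'EOF'
{"spec": {"versions": {"kubelet": "1.11.3"}}}
EOF

# Bump the kubelet version, as the interactive edit does.
jq '.spec.versions.kubelet = "1.11.5"' machine.json > machine-updated.json

jq -r '.spec.versions.kubelet' machine-updated.json   # prints 1.11.5
```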

## Deleting the Cluster

To delete the Master and confirm all relevant resources are deleted from the cloud, we're going to use the DigitalOcean CLI, [`doctl`](https://github.com/digitalocean/doctl). You can also use the DigitalOcean Cloud Control Panel or API instead of `doctl`.

First, save the Droplet ID of the Master, as we'll use it later to delete the control plane machine:

```bash
export MASTER_ID=$(kubectl --kubeconfig=kubeconfig get machines -l set=master -o jsonpath='{.items[0].metadata.annotations.droplet-id}')
```
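The jsonpath query above reads the `droplet-id` annotation from the first Machine labeled `set=master`. The same extraction can be done from JSON output with `jq`; an offline sketch with stand-in data:

```bash
# Stand-in for `kubectl get machines -l set=master -o json` output.
cat > machines.json <<'EOF'
{"items": [{"metadata": {"annotations": {"droplet-id": "123456"}}}]}
EOF

# Equivalent of the jsonpath query above.
MASTER_ID=$(jq -r '.items[0].metadata.annotations["droplet-id"]' machines.json)
echo "$MASTER_ID"   # prints 123456
```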

Now, delete all Workers in the cluster by removing all Machine objects with the label `set=node`:

```bash
kubectl --kubeconfig=kubeconfig delete machines -l set=node
```

You can confirm the nodes are deleted by checking the list of nodes. After some time, only the Master should be present:

```bash
kubectl --kubeconfig=kubeconfig get nodes
```

Then, delete all Services and PersistentVolumeClaims, so all Load Balancers and Volumes in the cloud are deleted:

```bash
kubectl --kubeconfig=kubeconfig delete svc --all
kubectl --kubeconfig=kubeconfig delete pvc --all
```

Finally, we can delete the Master using `doctl` and the `$MASTER_ID` environment variable we set earlier:

```bash
doctl compute droplet delete $MASTER_ID
```

You can use `doctl` to confirm that Droplets, Load Balancers and Volumes relevant to the cluster are deleted:

```bash
doctl compute droplet list
doctl compute load-balancer list
doctl compute volume list
```

## Development
TODO

## Documentation

Documentation is in the `/docs` directory.

## Getting involved and contributing

More about development and contributing practices can be found in [`CONTRIBUTING.md`](./CONTRIBUTING.md).

<!-- References -->

[prow]: https://go.k8s.io/bot-commands
[issue]: https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean/issues
[new_issue]: https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean/issues/new
[good_first_issue]: https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22
[cluster_api]: https://github.com/kubernetes-sigs/cluster-api
[kops]: https://github.com/kubernetes/kops
[kubicorn]: http://kubicorn.io/
[tilt]: https://tilt.dev
[cluster_api_tilt]: https://master.cluster-api.sigs.k8s.io/developer/tilt.html
126 changes: 126 additions & 0 deletions docs/getting-started.md
@@ -0,0 +1,126 @@
# Getting started

## Prerequisites

- Linux or macOS (Windows isn't supported at the moment).
- A [DigitalOcean][DigitalOcean] Account
- Install [kubectl][kubectl]
- Install [kustomize][kustomize] `v3.1.0+`
- [Packer][Packer] and [Ansible][Ansible] to build images
- `make`, to use the `Makefile` targets
- A management cluster. You can use a VM, a container, or an existing Kubernetes cluster as the management cluster.
  - If you want to use a VM, install [Minikube][Minikube], version 0.30.0 or greater. Also install a [driver][Minikube Driver]. For Linux, we recommend `kvm2`. For macOS, we recommend `VirtualBox`.
  - If you want to use a container, install [Kind][kind].
  - If you want to use an existing Kubernetes cluster, prepare a kubeconfig for this cluster.
- Install [doctl][doctl] (optional)

## Setup Environment

```bash
# Export the DigitalOcean access token and region
export DIGITALOCEAN_ACCESS_TOKEN=<access_token>
export DO_REGION=<region>

# Init doctl
doctl auth init --access-token ${DIGITALOCEAN_ACCESS_TOKEN}
```
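Every later step depends on the access token, so it can help to fail fast when it is unset. A small guard you might add to your own shell scripts (the function name and placeholder token below are our own, not part of the project):

```bash
# Fail fast if the access token is missing, instead of letting later
# doctl or kubectl calls fail with a less obvious error.
require_token() {
  if [ -z "${DIGITALOCEAN_ACCESS_TOKEN:-}" ]; then
    echo "DIGITALOCEAN_ACCESS_TOKEN is not set" >&2
    return 1
  fi
}

# Example: with a placeholder token set, the check passes.
DIGITALOCEAN_ACCESS_TOKEN=example-token require_token && echo "token is set"
```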

## Building images

Clone the image builder repository if you haven't already:

```bash
git clone https://sigs.k8s.io/image-builder.git image-builder
```

Change to the `images/capi` directory within the image builder repository:

```bash
cd image-builder/images/capi
```

Run the Make target to generate the DigitalOcean images:

```bash
make build-do-default
```

Check that the image is available in your account:

```bash
doctl compute image list-user
```
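To feed the image ID into the next step mechanically, you can filter doctl's JSON output with `jq`. An offline sketch with stand-in output (the image name and ID are made up; check what the build actually produced):

```bash
# Stand-in for `doctl compute image list-user --output json` output.
cat > images.json <<'EOF'
[{"id": 99999999, "name": "cluster-api-ubuntu-1804", "regions": ["fra1"]}]
EOF

# Pick the ID of the freshly built image by name prefix.
MACHINE_IMAGE=$(jq -r '.[] | select(.name | startswith("cluster-api")) | .id' images.json)
echo "$MACHINE_IMAGE"   # prints 99999999
```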


## Cluster Creation

> We assume you already have a running management cluster.
```bash
export CLUSTER_NAME=capdo-quickstart # change this name as you prefer.
export MACHINE_IMAGE=<image-id> # created in the step above.
```

For the purpose of this tutorial, we’ll name our cluster `capdo-quickstart`.

Generate the example files:

```bash
make generate-examples
```

Install the core components (CAPI & CABPK) and the provider components (CAPDO):

```bash
kubectl apply -f examples/_out/core-components.yaml
kubectl apply -f examples/_out/provider-components.yaml
```

Create the cluster and the control plane machine:

```bash
kubectl apply -f examples/_out/cluster.yaml
kubectl apply -f examples/_out/controlplane.yaml
```

After the control plane is up and running, retrieve the cluster kubeconfig:

```bash
kubectl --namespace=default get secret/${CLUSTER_NAME}-kubeconfig -o json \
| jq -r .data.value \
| base64 --decode \
> ./${CLUSTER_NAME}.kubeconfig
```
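The pipeline above just extracts the base64-encoded `value` field from the Secret and decodes it. An offline illustration of the same pipeline with a stand-in Secret:

```bash
# Build a stand-in Secret whose .data.value holds a base64-encoded kubeconfig.
ENCODED=$(printf 'apiVersion: v1\nkind: Config\n' | base64 | tr -d '\n')
printf '{"data": {"value": "%s"}}\n' "$ENCODED" > secret.json

# Same jq + base64 pipeline as above.
jq -r .data.value secret.json | base64 --decode > demo.kubeconfig

head -n 1 demo.kubeconfig   # prints: apiVersion: v1
```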

Deploy a CNI solution. Calico is used here as an example:

```bash
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
```

Deploy the DigitalOcean Cloud Controller Manager:

```bash
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f examples/digitalocean-ccm.yaml
```

Optionally, deploy the DigitalOcean CSI plugin:

```bash
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f examples/digitalocean-csi.yaml
```

Check the status of the control plane using `kubectl get nodes`:

```bash
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
```
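When checking status, you're waiting for the node's `STATUS` column to report `Ready`. If you want to script that wait, the check amounts to parsing the output; an offline sketch with stand-in `get nodes` output (node name and versions are made up):

```bash
# Stand-in for `kubectl get nodes` output.
cat > nodes.txt <<'EOF'
NAME                      STATUS   ROLES    AGE   VERSION
capdo-quickstart-master   Ready    master   5m    v1.16.4
EOF

# Count nodes reporting Ready (skipping the header line).
READY=$(awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }' nodes.txt)
echo "$READY"   # prints 1
```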

Finishing up, we'll create a single-node MachineDeployment:

```bash
kubectl apply -f examples/_out/machinedeployment.yaml
```

<!-- References -->
[kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[kustomize]: https://github.com/kubernetes-sigs/kustomize/releases
[kind]: https://github.com/kubernetes-sigs/kind#installation-and-usage
[doctl]: https://github.com/digitalocean/doctl#installing-doctl
[Minikube]: https://kubernetes.io/docs/tasks/tools/install-minikube/
[Minikube Driver]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md
[Packer]: https://www.packer.io/intro/getting-started/install.html
[Ansible]: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html
[DigitalOcean]: https://cloud.digitalocean.com/
