📖 Updating user docs #18

Merged (1 commit) on Dec 16, 2021
21 changes: 14 additions & 7 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,9 @@
# <img alt="capi" src="docs/pics/cluster-api.png" height="48x" /> Kubernetes Cluster API Provider Hetzner

[![GitHub release](https://img.shields.io/github/release/syself/cluster-api-provider-hetzner/all.svg?style=flat-square)](https://github.com/syself/cluster-api-provider-hetzner/releases)
[![GoDoc](https://godoc.org/github.com/syself/cluster-api-provider-hetzner?status.svg)](https://pkg.go.dev/github.com/syself/cluster-api-provider-hetzner?tab=overview)
[![Go Report Card](https://goreportcard.com/badge/github.com/syself/cluster-api-provider-hetzner)](https://goreportcard.com/report/github.com/syself/cluster-api-provider-hetzner)
[![Latest quay.io image tags](https://img.shields.io/github/v/tag/syself/cluster-api-provider-hetzner?include_prereleases&label=quay.io)](https://quay.io/repository/syself/cluster-api-provider-hetzner?tab=tags)

<p align="center">
<img alt="hcloud" src="docs/pics/hetzner.png"/>
@@ -17,22 +20,22 @@ hybrid deployments of Kubernetes.

> This is not an official Hetzner project! It is maintained by the folks at the cloud-native startup Syself.

## Launching a Kubernetes cluster on Hetzner

Check out the [Quickstart Guide](docs/quickstart.md) to create your first Kubernetes cluster on Hetzner using Cluster API.

## Features

* Native Kubernetes manifests and API
* Choice of Linux distribution (as long as a current cloud-init is available)
* Support for single and multi-node control plane clusters
* Support for HCloud Placement groups
* cloud-init based node bootstrapping
* Hetzner Dedicated Server *coming soon*

## Quick Start

Check out the [Cluster API Quick Start][quickstart] to create your first Kubernetes cluster on Hetzner using Cluster API.
Then please check out the [Quickstart Guide](docs/quickstart.md). *coming soon*
---

------

## Support Policy
## Compatibility with Cluster API and Kubernetes Versions

This provider's versions are compatible with the following versions of Cluster API:

@@ -59,6 +62,10 @@ Each version of Cluster API for Hetzner will attempt to support at least two Kubernetes versions.

------

## Operating system images
Note: Cluster API Provider Hetzner relies on a few prerequisites that have to be pre-installed in the operating system images used, e.g. a container runtime, kubelet, and kubeadm. Reference images can be found in kubernetes-sigs/image-builder and in templates/node-image. If it isn't possible to pre-install these prerequisites in the image, you can always deploy and execute custom scripts through the KubeadmConfig.
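
If you go the custom-script route, a sketch of what that might look like in a `KubeadmConfigTemplate` follows; the object name and the script paths are purely illustrative assumptions, not part of this repository:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: my-cluster-md-0
spec:
  template:
    spec:
      preKubeadmCommands:
        # illustrative placeholders: install a container runtime, kubelet and
        # kubeadm at boot time instead of baking them into the image
        - /opt/install-container-runtime.sh
        - /opt/install-kubernetes-binaries.sh
```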

---
## Documentation

Docs can be found in the `/docs` directory. The index can be found [here](docs/README.md).
11 changes: 1 addition & 10 deletions docs/README.md
Original file line number Diff line number Diff line change
@@ -5,17 +5,8 @@
- [Getting started](topics/quickstart.md )
- [Cluster API quick start](https://cluster-api.sigs.k8s.io/user/quick-start.html)

## Features

- [Topics](topics/summary.md)


## Development

- [Development guide](developers/development.md)
- [Proposals](proposals)
- [Releasing](developers/releasing.md)

## Troubleshooting

- [Troubleshooting guide](topics/troubleshooting.md)
- [Releasing](developers/releasing.md)
20 changes: 14 additions & 6 deletions docs/developers/development.md
Original file line number Diff line number Diff line change
@@ -190,6 +190,8 @@ After you have cloned both repositories, your folder structure should look like:
```

Now you need to configure the environment variables; alternatively, add them to the `kustomize_substitutions`.


Run the following to generate your `tilt-settings.json` file:

```shell
@@ -199,7 +201,6 @@ cat <<EOF > tilt-settings.json
"provider_repos": ["../cluster-api-provider-hetzner"],
"enable_providers": ["caph-controller-manager", "kubeadm-bootstrap", "kubeadm-control-plane"],
"kustomize_substitutions": {
"HCLOUD_TOKEN": "<YOUR-TOKEN>",
"SSH_KEY": "test",
"REGION": "fsn1",
"CONTROL_PLANE_MACHINE_COUNT": "3",
@@ -218,7 +219,12 @@ EOF

The cluster-api management components that are deployed are configured at the `/config` folder of each repository respectively. Making changes to those files will trigger a redeploy of the management cluster components.

#### Deploying a workload cluster
##### Creating the secret for the hetzner provider:

```shell
kubectl create secret generic hetzner --from-literal=hcloud=$HCLOUD_TOKEN
```
### Deploying a workload cluster without Tilt

If you want to deploy a workload cluster the common way, without letting Tilt do this for you,
you first need to set some environment variables.
@@ -235,17 +241,19 @@ export HCLOUD_CONTROL_PLANE_MACHINE_TYPE=cpx31 \
export HCLOUD_NODE_MACHINE_TYPE=cpx31 \
export CLUSTER_NAME="test"
```
Creating the secret for the hetzner-token:

#### Creating the secret for the hetzner provider:

```shell
kubectl create secret generic hetzner-token --from-literal=token=$TOKEN
kubectl create secret generic hetzner --from-literal=hcloud=$HCLOUD_TOKEN
```

Creating the "Workload Cluster":
#### Creating the "Workload Cluster":
```shell
$ make create-workload-cluster
```

To delete the "Workload Cluster":
#### To delete the "Workload Cluster":
```shell
$ make delete-workload-cluster
```
Empty file removed docs/developers/jobs.md
Empty file.
5 changes: 3 additions & 2 deletions docs/developers/tilt.md
Original file line number Diff line number Diff line change
@@ -21,16 +21,17 @@ Create a `tilt-settings.json` file and place it in your local copy of `cluster-a
"kubernetes_version": "v1.21.1",
"kustomize_substitutions": {
"HCLOUD_TOKEN": "<Your-Token>",
"SSH_KEY": "<SSH-KEY-NAME-IN-HCLOUD>",
"REGION": "fsn1",
"CONTROL_PLANE_MACHINE_COUNT": "3",
"WORKER_MACHINE_COUNT": "3",
"KUBERNETES_VERSION": "v1.21.1",
"HCLOUD_IMAGE_NAME": "test-image",
"HCLOUD_CONTROL_PLANE_MACHINE_TYPE": "cpx31",
"HCLOUD_NODE_MACHINE_TYPE": "cpx31",
"CLUSTER_NAME": "test",
"CLUSTER_NAME": "test"
},
"talos-bootstrap": "false",
"talos-bootstrap": "false"
}
```

Empty file removed docs/topics/api-server-endpoint.md
Empty file.
Empty file removed docs/topics/custom-images.md
Empty file.
Empty file removed docs/topics/failure-domains.md
Empty file.
189 changes: 189 additions & 0 deletions docs/topics/quickstart.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,189 @@
# Installation

## Common Prerequisites
* Install and set up kubectl in your local environment
* Install Kind and Docker

## Install and/or configure a Kubernetes cluster
Cluster API requires an existing Kubernetes cluster accessible via kubectl. During the installation process the Kubernetes cluster will be transformed into a management cluster by installing the Cluster API provider components, so it is recommended to keep it separated from any application workload.

It is a common practice to create a temporary, local bootstrap cluster which is then used to provision a target management cluster on the selected infrastructure provider.

## Choose one of the options below:

### 1. Existing Management Cluster.
For production use-cases a “real” Kubernetes cluster should be used with appropriate backup and DR policies and procedures in place. The Kubernetes cluster must be at least v1.22.1.
### 2. Kind.
kind can be used for creating a local Kubernetes cluster for development environments or for the creation of a temporary bootstrap cluster used to provision a target management cluster on the selected infrastructure provider.
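
A minimal kind configuration for such a bootstrap cluster might look like the following sketch; the cluster name is an arbitrary choice, not mandated by this guide:

```yaml
# kind-bootstrap.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: caph-bootstrap
```

It can then be created with `kind create cluster --config kind-bootstrap.yaml` and deleted again once the real management cluster is in place.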


## Install clusterctl

Please use the instructions here: https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl
or use: `make install-clusterctl`


## Initialize the management cluster
Now that we’ve got clusterctl installed and all the prerequisites in place, let’s transform the Kubernetes cluster into a management cluster by using `clusterctl init`. More information about clusterctl can be found [here](https://cluster-api.sigs.k8s.io/clusterctl/commands/commands.html).

### Deploying the hetzner provider
The recommended method is using Clusterctl.
#### Register the hetzner provider
Add the following to `$HOME/.cluster-api/clusterctl.yaml`:

```
providers:
- name: "hetzner"
url: "https://github.com/syself/cluster-api-provider-hetzner/releases/latest/infrastructure-components.yaml"
type: "InfrastructureProvider"
```

#### Initialization of cluster-api provider hetzner

For the latest version:
```shell
clusterctl init --infrastructure hetzner
```
or for a specific version: `clusterctl init --infrastructure hetzner:vX.X.X`

## HA Cluster API Components (optional)
The clusterctl CLI will create all four needed components: cluster-api, cluster-api-bootstrap-provider-kubeadm, cluster-api-control-plane-kubeadm, and cluster-api-provider-hetzner.
It uses the respective `*-components.yaml` from the releases. However, these are not highly available. By scaling the components we can at least reduce the probability of failure. If this is not enough, PodDisruptionBudgets can be added.
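
As an illustration, a PodDisruptionBudget for the core controller might look like this sketch; the label selector is an assumption and should be verified against the actual pod labels (`kubectl -n capi-system get pods --show-labels`):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: capi-controller-manager
  namespace: capi-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      control-plane: controller-manager   # assumed label, verify before applying
```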

Scale up the deployments
```shell
kubectl -n capi-system scale deployment capi-controller-manager --replicas=2

kubectl -n capi-kubeadm-bootstrap-system scale deployment capi-kubeadm-bootstrap-controller-manager --replicas=2

kubectl -n capi-kubeadm-control-plane-system scale deployment capi-kubeadm-control-plane-controller-manager --replicas=2

kubectl -n cluster-api-provider-hetzner-system scale deployment caph-controller-manager --replicas=2
```

---
## Create your first workload cluster
Once the management cluster is ready, you can create your first workload cluster.

### Preparing the workload cluster configuration
To create a workload cluster we need to do some preparation:
1. First, we need an HCloud project.
2. Then, we need to generate an API token with read & write rights.
3. Finally, we need to generate an SSH key, upload the public key to HCloud, and give it a name.

We export the HCloud token as an environment variable to use it later, and do the same with our SSH key name.

#### Required configuration for hetzner provider

```shell
# The HCloud project your cluster will be placed in.
# You have to get a token from your HCloud project.
export HCLOUD_TOKEN="<YOUR-TOKEN>"
# The SSH key name you uploaded to HCloud
export SSH_KEY="<ssh-key-name>"
# The Image name of your operating system.
export HCLOUD_IMAGE_NAME=test-image
export CLUSTER_NAME="my-cluster"
export REGION="fsn1"
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
export KUBERNETES_VERSION=1.22.1
export HCLOUD_CONTROL_PLANE_MACHINE_TYPE=cpx31
export HCLOUD_NODE_MACHINE_TYPE=cpx31
```
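
As a convenience (not part of the guide), a small bash helper can catch forgotten exports before you continue; `check_vars` is a hypothetical name and relies on bash indirect expansion:

```shell
# Hypothetical helper: report any variable from the list that is unset or
# empty. ${!v} is bash indirect expansion (value of the variable named by v).
check_vars() {
  local rc=0 v
  for v in "$@"; do
    if [ -z "${!v:-}" ]; then
      echo "missing: $v"
      rc=1
    fi
  done
  return "$rc"
}

export REGION="fsn1" SSH_KEY="test"
check_vars REGION SSH_KEY && echo "all required variables are set"
```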

For a list of all variables needed for generating a cluster manifest (from the `cluster-template.yaml`), use `clusterctl generate cluster my-cluster --list-variables`:
```
Required Variables:
- HCLOUD_CONTROL_PLANE_MACHINE_TYPE
- HCLOUD_IMAGE_NAME
- HCLOUD_NODE_MACHINE_TYPE
- REGION
- SSH_KEY

Optional Variables:
- CLUSTER_NAME (defaults to my-cluster)
- CONTROL_PLANE_MACHINE_COUNT (defaults to 1)
- KUBERNETES_VERSION (defaults to 1.21.1)
- WORKER_MACHINE_COUNT (defaults to 1)
```

#### Create a secret for the hetzner provider.

In order for the hetzner provider integration to communicate with the Hetzner API ([HCloud API](https://docs.hetzner.cloud/) + [Robot API](https://robot.your-server.de/doc/webservice/en.html#preface)), we need to create a secret with the access data. The secret must be in the same namespace as the other CRs.

```shell
kubectl create secret generic hetzner --from-literal=hcloud=$HCLOUD_TOKEN
```
The secret name and the tokens can also be customized in the cluster template; however, this is out of the scope of this quickstart guide.
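
Equivalently, the secret can be written as a manifest; the namespace shown here is an assumption and must match the namespace of your cluster objects:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hetzner
  namespace: default   # assumed; use the namespace of your cluster CRs
stringData:
  hcloud: <YOUR-TOKEN>
```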

### Creating a viable Node Image
For using Cluster API with the bootstrap provider kubeadm, we need a server with all the necessary binaries and settings for running Kubernetes.
There are several ways to achieve this. In this quick-start guide we use pre-kubeadm commands in the KubeadmControlPlane and KubeadmConfigTemplate objects. These are propagated from the bootstrap provider kubeadm and the control plane provider kubeadm to the node as cloud-init commands. This approach works universally, also with other infrastructure providers.
For HCloud there is an alternative approach using Packer, which creates a snapshot to boot from; in terms of versioning and node-creation speed this is clearly advantageous.

### Generate your cluster.yaml
The `clusterctl generate cluster` command returns a YAML template for creating a workload cluster.
It generates a YAML file named `my-cluster.yaml` with a predefined list of Cluster API objects (Cluster, Machines, Machine Deployments, etc.) to be deployed in the current namespace (use the `--target-namespace` flag to specify a different target namespace).
See also `clusterctl generate cluster --help`.

```shell
clusterctl generate cluster my-cluster --kubernetes-version v1.22.1 --control-plane-machine-count=3 --worker-machine-count=3 > my-cluster.yaml
```

To use, for example, the HCloud network, use a flavor:
```shell
clusterctl generate cluster my-cluster --kubernetes-version v1.22.1 --control-plane-machine-count=3 --worker-machine-count=3 --flavor hcloud-network > my-cluster.yaml
```

For a full list of flavors, please check out the [release page](https://github.com/syself/cluster-api-provider-hetzner/releases). All cluster templates start with `cluster-template-`; the flavor name is the suffix.
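
The naming rule can be illustrated with plain shell parameter expansion; the template file name here is just an example:

```shell
# strip the fixed prefix and the extension to obtain the flavor name
template="cluster-template-hcloud-network.yaml"
flavor="${template#cluster-template-}"
flavor="${flavor%.yaml}"
echo "$flavor"   # prints: hcloud-network
```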

### Apply the workload cluster
```shell
kubectl apply -f my-cluster.yaml
```

### Accessing the workload cluster
The cluster will now start provisioning. You can check status with:
```shell
kubectl get cluster
```
You can also get an “at a glance” view of the cluster and its resources by running:
```shell
clusterctl describe cluster my-cluster
```
To verify the first control plane is up:
```shell
kubectl get kubeadmcontrolplane
```
> The control plane won’t be Ready until we install a CNI in the next step.

After the first control plane node is up and running, we can retrieve the workload cluster Kubeconfig:
```shell
export CAPH_WORKER_CLUSTER_KUBECONFIG=/tmp/my-cluster.kubeconfig
clusterctl get kubeconfig my-cluster > $CAPH_WORKER_CLUSTER_KUBECONFIG
```

### Deploy a CNI solution
```shell
helm repo add cilium https://helm.cilium.io/

KUBECONFIG=$CAPH_WORKER_CLUSTER_KUBECONFIG helm upgrade --install cilium cilium/cilium --version 1.11.0 \
--namespace kube-system \
-f templates/cilium/cilium.yaml
```
### Deploy HCloud Cloud Controller Manager

For a cluster without private network:

```shell
helm repo add syself https://charts.syself.com

KUBECONFIG=$CAPH_WORKER_CLUSTER_KUBECONFIG helm upgrade --install ccm syself/ccm-hcloud --version 1.0.2 \
--namespace kube-system \
--set secret.name=hetzner \
--set privateNetwork.enabled=false
```

Empty file removed docs/topics/running-production.md
Empty file.
Empty file removed docs/topics/troubleshooting.md
Empty file.