
Merge pull request #13 from kthcloud/kubevirt-new-system
Kubevirt new system
saffronjam committed Apr 26, 2024
2 parents 4467b64 + 5414b6b commit ef4d5f0
Showing 2 changed files with 32 additions and 27 deletions.
11 changes: 5 additions & 6 deletions hugo/content/News/2024-04-14.md
@@ -24,13 +24,12 @@ We have been working on this project for a few months now and have made signific
  | se-flem | se-flem-001 | Pending | Management server for CloudStack, will be migrated last |
  | se-flem | se-flem-002 | Migrated | Control-node for deploy-cluster |
  | se-flem | se-flem-003 | Migrated | Control-node and worker-node for sys-cluster |
- | se-flem | se-flem-006 | Pending | |
+ | se-flem | se-flem-006 | Migrated | Worker-node for deploy-cluster |
  | se-flem | se-flem-013 | Migrated | Worker-node for deploy-cluster |
- | se-flem | se-flem-014 | Pending | |
- | se-flem | se-flem-015 | Pending | |
- | se-flem | se-flem-016 | Pending | |
- | se-flem | se-flem-017 | Pending | |
- | se-flem | se-flem-018 | Pending | |
+ | se-flem | se-flem-015 | Migrated | Worker-node for deploy-cluster |
+ | se-flem | se-flem-016 | In Progress | |
+ | se-flem | se-flem-017 | Migrated | Worker-node for deploy-cluster |
+ | se-flem | se-flem-018 | Migrated | Worker-node for deploy-cluster |
  | se-flem | se-flem-019 | Pending | |
  | se-kista | t01n05 | Pending | Awaiting migration of all hosts in `se-flem` zone |
  | se-kista | t01n14 | Pending | Awaiting migration of all hosts in `se-flem` zone |
Expand Down
48 changes: 27 additions & 21 deletions hugo/content/maintenance/installKubernetesCluster.md
@@ -112,13 +112,13 @@ This cluster is set up using Rancher, which means that a sys-cluster is required

### Set up nodes
1. Log in to [Rancher](https://mgmt.cloud.cbh.kth.se)
- 2. Navigate to `Global Settings` -> `Settings` and edit `auth-token-max-ttl-minutes` to `0` to disable token expiration.
+ 2. Navigate to `Global Settings` -> `Settings` and edit both `auth-token-max-ttl-minutes` and `kubeconfig-default-token-ttl-minutes` to `0` to disable token expiration.
3. Click on the profile icon in the top right corner and select `Account & API Keys`.
4. Create an API key that does not expire and save the key.\
It will be used when creating cloud-init scripts for nodes connecting to the cluster.
5. Navigate to `Cluster Management` -> `Create` and select `Custom`
6. Fill in the required details for your cluster, such as automatic snapshots.
- 7. Make sure to **untick** both `CoreDNS` and `NGINX Ingress` as they will be installed by Helm later.
+ 7. Make sure to **untick** `NGINX Ingress` as it will be installed by Helm later.
8. Click `Create` and wait for the cluster to be created.
9. Deploy your node by following [Host provisioning guide](/maintenance/hostProvisioning.md).\
Remember to use the API key you created in step 4 when creating the cloud-init script.
@@ -140,15 +140,7 @@ export PDNS_API_KEY=
export IP_POOL=
```

- 2. Install `CoreDNS`
- ```bash
- helm upgrade --install coredns coredns \
-   --repo https://coredns.github.io/helm \
-   --namespace kube-system \
-   --create-namespace
- ```
-
- 3. Install `Ingress-NGINX`
+ 2. Install `Ingress-NGINX`
```bash
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
@@ -157,7 +149,15 @@ helm upgrade --install ingress-nginx ingress-nginx \
--set controller.ingressClassResource.default=true
```

- 4. Install `cert-manager`
+ Edit the created config map and add the following to the `data` section:
+ ```yaml
+ data:
+   proxy-buffering: "on"
+   proxy-buffers: 4 "512k"
+   proxy-buffer-size: "256k"
+ ```
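The config-map edit above can also be done non-interactively. A sketch, assuming the Helm chart's default ConfigMap name `ingress-nginx-controller` in the `ingress-nginx` namespace (these defaults are not stated in the guide):

```shell
# Merge the proxy buffer settings from the guide into the controller ConfigMap.
# ConfigMap name and namespace are the ingress-nginx chart defaults (assumption).
kubectl patch configmap ingress-nginx-controller \
  --namespace ingress-nginx \
  --type merge \
  --patch '{"data":{"proxy-buffering":"on","proxy-buffers":"4 \"512k\"","proxy-buffer-size":"256k"}}'
```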
+ 3. Install `cert-manager`
```bash
helm upgrade --install \
cert-manager \
@@ -170,7 +170,7 @@ helm upgrade --install \
--set installCRDs=true
```
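Before moving on to the webhook, it can help to confirm cert-manager is actually up. A minimal check, assuming the chart's default `cert-manager` namespace:

```shell
# Block until every cert-manager deployment reports Available, or time out.
kubectl --namespace cert-manager wait deployment --all \
  --for=condition=Available --timeout=180s
```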

- 5. Install cert-manager Webhook for DNS challenges
+ 4. Install cert-manager Webhook for DNS challenges
kthcloud uses PowerDNS for DNS management, so we need to install the cert-manager-webhook for PowerDNS.

```bash
@@ -182,7 +182,7 @@ helm install \
--set groupName=${DOMAIN} \
```

- 6. Install cert-manager issuer
+ 5. Install cert-manager issuer
Now that we have the webhook installed, we need to install the issuer that will use the webhook to issue certificates.

Create the PDNS secret (or any other DNS provider secret)
@@ -243,16 +243,16 @@ spec:
    app.kubernetes.io/deploy-name: deploy-wildcard-secret
  issuerRef:
    kind: ClusterIssuer
-   name: letsencrypt-prod
+   name: go-deploy-cluster-issuer
  commonName: ""
  dnsNames:
-   - "*.apps.${DOMAIN}"
+   - "*.app.${DOMAIN}"
+   - "*.vm-app.${DOMAIN}"
    - "*.storage.${DOMAIN}"
EOF
```

- 7. Install `MetalLB`
+ 6. Install `MetalLB`
```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.4/config/manifests/metallb-native.yaml
```
@@ -276,14 +276,20 @@ metadata:
EOF
```
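The manifest applied above is mostly hidden by the collapsed diff. As an illustration only (resource names here are hypothetical; the real ones are in the elided lines), a typical MetalLB L2 setup using the `IP_POOL` range from earlier looks like:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool        # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - ${IP_POOL}            # e.g. an address range or CIDR
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2          # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool          # advertise the pool above via L2/ARP
```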

- 8. Install `hairpin-proxy`
+ Add `metallb.universe.tf/allow-shared-ip: go-deploy` to the Ingress-NGINX service to allow MetalLB to use the IP for VMs.
+ Use the Rancher GUI, edit the manifest directly, or run the following command:
+ ```bash
+ kubectl edit svc -n ingress-nginx ingress-nginx-controller
+ ```
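If you prefer a one-liner over an interactive edit, the same annotation can be set with `kubectl annotate` (service name and namespace as in the `kubectl edit` command above):

```shell
# Add the shared-IP annotation directly to the Ingress-NGINX controller Service.
kubectl annotate svc ingress-nginx-controller \
  --namespace ingress-nginx \
  metallb.universe.tf/allow-shared-ip=go-deploy
```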

+ 7. Install `hairpin-proxy`
Hairpin-proxy is a proxy that allows us to access services in the cluster from within the cluster. This is needed for the webhook to be able to access the cert-manager service when validating DNS challenges.

```bash
- kubectl apply -f https://raw.githubusercontent.com/compumike/hairpin-proxy/v0.2.1/deploy.yml
+ kubectl apply -f https://raw.githubusercontent.com/JarvusInnovations/hairpin-proxy/v0.3.0/deploy.yml
```

- 9. Install `KubeVirt`
+ 8. Install `KubeVirt`
KubeVirt is what enables us to run VMs in the cluster. This is not mandatory, but it is required if the cluster is to be used for VMs.

Install the KubeVirt operator and CRDs
@@ -306,7 +312,7 @@ kubectl create -f https://github.com/kubevirt/containerized-data-importer/releas
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
```
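Once the operator and CDI are installed, a quick smoke test is to boot a throwaway VM. A minimal sketch (not part of the guide) using KubeVirt's public Cirros demo container disk; the VM name is arbitrary:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: smoke-test-vm        # arbitrary example name
spec:
  running: true              # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi    # Cirros is tiny; this is enough to boot
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```

Apply it with `kubectl apply -f`, watch it come up with `kubectl get vmis`, and delete it when done.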

- 10. Install `Velero`
+ 9. Install `Velero`
Velero is a backup and restore tool for Kubernetes. It is used to back up the cluster in case of a disaster. Keep in mind that it does NOT back up persistent volumes in this configuration, but only the cluster state that points to the volumes. This means that the volumes must be backed up separately (either by the application using them or our TrueNAS storage solution).
*Note: You will need the Velero CLI to use Velero commands. You can download it from the [Velero releases page](https://velero.io/docs/v1.8/basic-install)*
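The installation steps themselves are collapsed below; once Velero is running, day-to-day use via the CLI looks roughly like this (backup and schedule names are examples):

```shell
# One-off backup of the cluster state (persistent volumes are NOT
# included in this configuration, as noted above).
velero backup create cluster-state --wait

# Recurring backup, here every day at 03:00.
velero schedule create daily-cluster-state --schedule "0 3 * * *"

# Restore cluster state from a previous backup.
velero restore create --from-backup cluster-state
```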

