Docs: install and offloading docs rephrasing
frisso authored and adamjensenbot committed May 22, 2023
1 parent a8ac245 commit 3801639
Showing 2 changed files with 42 additions and 41 deletions.
6 changes: 4 additions & 2 deletions docs/features/offloading.md
This solution enables the **transparent extension** of the local cluster, with the new node (and its capabilities) seamlessly taken into account by the vanilla Kubernetes scheduler when selecting the best place for the workloads execution.
At the same time, this approach is fully compliant with the **standard Kubernetes APIs**, hence allowing you to interact with and inspect offloaded pods just as if they were executed locally.

(FeatureOffloadingAssignedResources)=

## Assigned resources

By default, the virtual node is assigned 90% of the resources available in the remote cluster. For example:
* If the remote cluster has an autoscaling mechanism that, at some point, doubles the size of the cluster to 200 vCPUs (all of them unused by any pod), the virtual node will be resized to 180 vCPUs.

This mechanism applies to all the physical resources available in the remote cluster, e.g., CPUs, RAM, GPUs and more.
The percentage of sharing can be customized also at run-time using the `--sharing-percentage` option, as documented in the proper [section](InstallControlPlaneFlags) of the Liqo installation.
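As a rough sketch, the default assignment can be reproduced with plain shell integer arithmetic (which floors toward zero, mirroring the rounding caveat discussed in this section); the numbers match the examples above:

```shell
# Sketch: virtual node resources = remote resources * sharing percentage / 100.
# Integer division floors the result, which is how 1 GPU can become 0.
SHARING_PERCENTAGE=90

REMOTE_VCPUS=200
echo $(( REMOTE_VCPUS * SHARING_PERCENTAGE / 100 ))   # prints 180

REMOTE_GPUS=1
echo $(( REMOTE_GPUS * SHARING_PERCENTAGE / 100 ))    # prints 0
```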

```{warning}
Pay attention to _math rounding_. For instance, if your remote cluster has 1 GPU, with default settings the virtual node will be set with 0.9 GPUs. Since numbers must be integers, you may end up with a virtual node with _zero_ GPUs.
```
## Pod offloading

Once a **pod is scheduled onto a virtual node**, the corresponding Liqo virtual kubelet (indirectly) creates a **twin pod object** in the remote cluster for actual execution.
Liqo supports the offloading of both **stateless** and **stateful** pods, the latter either relying on the provided [**storage fabric**](/features/storage-fabric) or leveraging externally managed solutions (e.g., persistent volumes provided by the cloud provider infrastructure).

**Remote pod resiliency** (hence, service continuity), even in case of temporary connectivity loss between the two control planes, is ensured through a **custom resource** (i.e., *ShadowPod*) wrapping the pod definition, and triggering a Liqo enforcement logic running in the remote cluster.
This guarantees that the desired pod is always present, without requiring the intervention of the originating cluster.
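As an illustration, the enforcement resources can be inspected directly on the remote cluster; the sketch below assumes the *ShadowPod* custom resource is exposed as `shadowpods` and uses a placeholder namespace name, both to be adapted to your setup:

```bash
# On the remote cluster: list the ShadowPod resources backing offloaded pods.
kubectl get shadowpods --all-namespaces

# Compare with the twin pods actually running (hypothetical namespace name):
kubectl get pods -n my-namespace-liqo-suffix
```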
77 changes: 38 additions & 39 deletions docs/installation/install.md
# Install

The deployment of all the Liqo components is managed through a **Helm chart**.

We strongly recommend **installing Liqo using *liqoctl***, as it automatically handles the required customizations for each supported provider/distribution (e.g., AWS, EKS, GKE, Kubeadm, etc.).
Under the hood, *liqoctl* uses [Helm 3](https://helm.sh/) to configure and install the Liqo Helm chart available on the official repository.

Alternatively, *liqoctl* can be configured to generate a local file with **pre-configured values**, which can be further customized and used for a manual installation with Helm.

## Install with liqoctl

Below, you can find the basic information to install and configure Liqo, depending on the selected **Kubernetes distribution** and/or **cloud provider**.
By default, *liqoctl install* installs the latest *stable* version of Liqo, although this can be changed with the `--version` flag.

The rest of this page presents **additional customization options** that apply to all setups, as well as advanced options that are cloud/distribution-specific.

```{admonition} Note
*liqoctl* implements a *kubectl* compatible behavior with respect to Kubernetes API access, hence supporting the `KUBECONFIG` environment variable, as well as all the standard flags, including `--kubeconfig` and `--context`.
Hence, make sure you have selected the correct target cluster before issuing *liqoctl* commands (as you would do with *kubectl*).
```
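For example (kubeconfig path and context name below are placeholders to adapt to your environment):

```bash
# Point liqoctl at a specific kubeconfig and context, as you would with kubectl.
export KUBECONFIG="$HOME/.kube/config"
liqoctl install kubeadm --context my-cluster-context
```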

`````{tab-set}
**Installation**
Liqo can be installed on a Kubeadm cluster with the following command:
```bash
liqoctl install kubeadm
```
The name of the cluster is automatically generated, then used during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
```{admonition} Service Type
To change this behavior, check the [network flags](NetworkFlags).
```
**Supported versions**
Liqo was tested on OpenShift Container Platform (OCP) 4.8.
**Installation**
Liqo can be installed on an OpenShift Container Platform (OCP) cluster with the following command:
```bash
liqoctl install openshift
```
The name of the cluster is automatically generated, then used during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
To install Liqo on AKS, you should first log in using the `az` CLI:

```bash
az login
```
Before continuing, you should export the following variables with some information about your cluster:
```bash
# The resource group where the cluster is created
export AKS_RESOURCE_GROUP=resource-group-name
```

During the installation process, you need read-only permissions on the AKS cluster.
**Installation**
Liqo can be installed on an AKS cluster with the following command:
```bash
liqoctl install aks --resource-group-name "${AKS_RESOURCE_GROUP}" \
--resource-name "${AKS_RESOURCE_NAME}" \
--subscription-name "${AKS_SUBSCRIPTION_ID}"
```
The name of the cluster will be equal to the one specified in the `--resource-name` parameter.
Alternatively, you can manually set a different name with the `--cluster-name` *liqoctl* flag.
```{admonition} Note
If you are running an [AKS private cluster](https://learn.microsoft.com/en-us/azure/aks/private-clusters), you may need to set the `--disable-api-server-sanity-check` *liqoctl* flag, since the API Server in your kubeconfig may be different from the one retrieved from the Azure APIs.
```
Before continuing, you should export the following variables with some information about your cluster:
```bash
# The name of the target cluster
export EKS_CLUSTER_NAME=cluster-name
# The region of the target cluster
export EKS_CLUSTER_REGION=cluster-region
```
Then, you should retrieve the cluster's kubeconfig (if you have not done it already) with the following CLI command:
```bash
aws eks --region ${EKS_CLUSTER_REGION} update-kubeconfig --name ${EKS_CLUSTER_NAME}
```
**Installation**
Liqo can be installed on an EKS cluster with the following command:
```bash
liqoctl install eks --eks-cluster-region=${EKS_CLUSTER_REGION} \
--eks-cluster-name=${EKS_CLUSTER_NAME}
```
The name of the cluster will be equal to the one specified in the `--eks-cluster-name` parameter.
Alternatively, you can manually set a different name with the `--cluster-name` *liqoctl* flag.
```{admonition} Service Type
By default, the **EKS** provider exposes *liqo-auth* and *liqo-gateway* with **LoadBalancer** services.
```

Liqo does not support GKE Autopilot Clusters.
To install Liqo on GKE, you should create a service account for *liqoctl*, granting the read rights for the GKE clusters (you may reduce the scope to a specific cluster if you prefer).
First, you should export the following variables with some information about your cluster and the service account to create:
```bash
# The name of the service account used by liqoctl to interact with GCP
export GKE_SERVICE_ACCOUNT_ID=liqoctl
```

```bash
gcloud iam service-accounts keys create ${GKE_SERVICE_ACCOUNT_PATH} \
--iam-account=${GKE_SERVICE_ACCOUNT_ID}@${GKE_PROJECT_ID}.iam.gserviceaccount.com
```
Finally, you should retrieve the cluster’s kubeconfig (if you have not done it already) with the following CLI command in case of **zonal** GKE clusters:
```bash
gcloud container clusters get-credentials ${GKE_CLUSTER_ID} \
--zone ${GKE_CLUSTER_ZONE} --project ${GKE_PROJECT_ID}
```
or, in case of **regional** GKE clusters:
```bash
gcloud container clusters get-credentials ${GKE_CLUSTER_ID} \
    --region ${GKE_CLUSTER_REGION} --project ${GKE_PROJECT_ID}
```

The retrieved kubeconfig will be added to the currently selected file.
**Installation**
Liqo can be installed on a zonal GKE cluster with the following command:
```bash
liqoctl install gke --project-id ${GKE_PROJECT_ID} \
    --cluster-id ${GKE_CLUSTER_ID} --zone ${GKE_CLUSTER_ZONE} \
--credentials-path ${GKE_SERVICE_ACCOUNT_PATH}
```
The name of the cluster will be equal to the one defined in GCP.
Alternatively, you can manually set a different name with the `--cluster-name` *liqoctl* flag.
```{admonition} Service Type
By default, the **GKE** provider exposes *liqo-auth* and *liqo-gateway* with **LoadBalancer** services.
```
**Installation**
Liqo can be installed on a K3s cluster with the following command:
```bash
liqoctl install k3s
```
You may additionally set the `--api-server-url` flag to override the Kubernetes API Server address used by remote clusters to contact the local one.
This operation is necessary in case the default address (`https://<control-plane-node-ip>:6443`) is unsuitable (e.g., the node IP is externally remapped).
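For example, if the node IP is remapped to a public address (the address below is a documentation placeholder):

```bash
liqoctl install k3s --api-server-url https://203.0.113.10:6443
```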
The name of the cluster is automatically generated, then used during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
```{admonition} Service Type
To change this behavior, check the [network flags](NetworkFlags).
```
**Installation**
Liqo can be installed on a KinD cluster with the following command:
```bash
liqoctl install kind
```
The name of the cluster is automatically generated, then used during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
```{admonition} Service Type
By default, the **kind** provider exposes *liqo-auth* and *liqo-gateway* with **NodePort** services.
To change this behavior, check the [network flags](NetworkFlags).
```
**Configuration**
To install Liqo on alternative Kubernetes distributions, you should manually retrieve three main configuration parameters:
* **API Server URL**: the Kubernetes API Server URL (defaults to the one specified in the kubeconfig).
* **Pod CIDR**: the range of IP addresses used by the cluster for the pod network.
* **Service CIDR**: the range of IP addresses used by the cluster for service VIPs.
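As an example, on a kubeadm-based cluster these parameters can often be recovered as follows (a sketch: the exact location of this information depends on the distribution):

```bash
# API Server URL of the current context
kubectl cluster-info

# Pod and Service CIDRs, as recorded in the kubeadm configuration (kubeadm clusters only)
kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -iE 'podsubnet|servicesubnet'
```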
**Installation**
Once the above parameters have been retrieved, Liqo can be installed on a generic cluster with the following command:
```bash
liqoctl install --api-server-url=<API-SERVER-URL> \
--pod-cidr=<POD-CIDR> --service-cidr=<SERVICE-CIDR>
```
The name of the cluster is automatically generated, then used during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
```{admonition} Service Type
```
`````
Once expired, the process is aborted and Liqo is rolled back to the previous version.
* `--verbose`: enables verbose logs, providing additional information concerning the installation/upgrade process (e.g., for troubleshooting).

(InstallControlPlaneFlags)=

### Control plane

The main control plane flags include:

* `--cluster-name`: configures a **name identifying the cluster** in Liqo.
This name is propagated to remote clusters during the peering process, and used to identify the corresponding virtual nodes and the Liqo resources used in the peering process. Additionally, the cluster name is used as part of the suffix to ensure namespace name uniqueness during the offloading process. If a cluster name is not specified, it defaults to the name assigned by the cloud provider, if any, or is automatically generated.
* `--cluster-labels`: a set of **labels** (i.e., key/value pairs) **identifying the cluster in Liqo** (e.g., geographical region, Kubernetes distribution, cloud provider, ...) and automatically propagated during the peering process to the corresponding virtual nodes.
These labels can be used later to **restrict workload offloading to a subset of clusters**, as detailed in the [namespace offloading usage section](/usage/namespace-offloading).
* `--sharing-percentage`: the maximum percentage of available **cluster resources** that could be shared with remote clusters. This is Liqo's default behavior, which can be changed by deploying a custom [resource plugin](https://github.com/liqotech/liqo-resource-plugins).
More details about the amount of resources shared by a cluster are available in the [Resource Offloading](FeatureOffloadingAssignedResources) page.
**Note**: the `--sharing-percentage` can be updated (e.g., via Helm) dynamically, without reinstalling Liqo.
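As a sketch, both flags can be combined at install time, and the sharing percentage can later be updated through Helm; the label keys and the Helm values path below are assumptions to check against your setup and the chart's `values.yaml`:

```bash
# Install with custom labels and a 50% sharing percentage (label keys are examples).
liqoctl install kubeadm \
  --cluster-labels topology.liqo.io/region=eu-west,liqo.io/provider=kubeadm \
  --sharing-percentage 50

# Later, update the percentage without reinstalling (hypothetical values path):
helm upgrade liqo liqo/liqo --namespace liqo --reuse-values \
  --set controllerManager.config.resourceSharingPercentage=30
```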

(NetworkFlags)=
