docs: minor modifications to installation section
lucafrancescato authored and giorio94 committed Jun 1, 2022
1 parent 70bb438 commit 8e2af13
Showing 4 changed files with 22 additions and 22 deletions.
18 changes: 9 additions & 9 deletions docs/installation/install.md
@@ -39,7 +39,7 @@ Liqo can be installed on a Kubeadm cluster through:
```bash
liqoctl install kubeadm
```
-By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading process.
+By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
````
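For instance, a minimal invocation overriding the generated name could look like the following (the name `milan` is purely illustrative):

```bash
# Install Liqo on a Kubeadm cluster with an explicit cluster name.
liqoctl install kubeadm --cluster-name milan
```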
@@ -57,7 +57,7 @@ Liqo can be installed on an OpenShift Container Platform (OCP) cluster through:
```bash
liqoctl install openshift
```
-By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading process.
+By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
````
@@ -246,7 +246,7 @@ Liqo can be installed on a K3s cluster through:
```bash
liqoctl install k3s
```
-By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading process.
+By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
````
@@ -260,7 +260,7 @@ Liqo can be installed on a KinD cluster through:
```bash
liqoctl install kind
```
-By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading process.
+By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
````
@@ -283,7 +283,7 @@
```bash
liqoctl install --api-server-url=<API-SERVER-URL> \
--pod-cidr=<POD-CIDR> --service-cidr=<SERVICE-CIDR>
```
-By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading process.
+By default, the cluster is assigned an automatically generated name, then leveraged during the peering and offloading processes.
Alternatively, you can manually specify a desired name with the `--cluster-name` flag.
````
`````
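As a concrete sketch, with illustrative values in place of the placeholders (these addresses and CIDRs are examples, not defaults):

```bash
# Example values only: replace them with your cluster's actual parameters.
liqoctl install --api-server-url=https://203.0.113.10:6443 \
    --pod-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
```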
@@ -302,8 +302,8 @@ The main control plane flags include:
* `--cluster-name`: configures a **name identifying the cluster** in Liqo.
This name is propagated to remote clusters during the peering process, and used to identify the corresponding virtual nodes and the technical resources leveraged for the negotiation process. Additionally, it is leveraged as part of the suffix to ensure namespace name uniqueness during the offloading process. If a cluster name is not specified, it defaults to the name of the cluster in the cloud provider, if any, or it is automatically generated.
* `--cluster-labels`: a set of **labels** (i.e., key/value pairs) **identifying the cluster in Liqo** (e.g., geographical region, Kubernetes distribution, cloud provider, ...) and automatically propagated during the peering process to the corresponding virtual nodes.
-These label can then be later used to **restrict workload offloading to a subset of clusters**, as detailed in the [namespace offloading usage section](/usage/namespace-offloading).
-* `--sharing-percentage`: the maximum percentage of available **cluster resources** that could be shared with remote cluster.
+These labels can be used later to **restrict workload offloading to a subset of clusters**, as detailed in the [namespace offloading usage section](/usage/namespace-offloading).
+* `--sharing-percentage`: the maximum percentage of available **cluster resources** that could be shared with remote clusters.
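A sketch combining the flags above (the cluster name, label values, and percentage are illustrative, and the comma-separated label syntax is an assumption to verify against `liqoctl install --help`):

```bash
# Name the cluster, attach identifying labels, and share at most 30% of resources.
liqoctl install kubeadm --cluster-name milan \
    --cluster-labels="region=eu-west,provider=kubeadm" \
    --sharing-percentage=30
```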

### Networking

@@ -365,7 +365,7 @@ Development versions include:
* The commits of *pull requests* to the Liqo repository, whose images have been built through the appropriate bot command.

The installation of a development version of Liqo can be triggered specifying a **commit *SHA*** through the `--version` flag.
-In this case, *liqoctl* proceeds **cloning the repository** (either from the official repository, or from a fork configured through the `--repo-url` flag) at the given revision, and leveraging the Helm chart therein contained.
+In this case, *liqoctl* proceeds to **clone the repository** (either from the official repository, or from a fork configured through the `--repo-url` flag) at the given revision, and to leverage the Helm chart therein contained.
Alternatively, the Helm chart can be retrieved from a **local path**, as configured through the `--local-chart-path` flag.
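For example, a development installation from a fork might look like the following (the commit SHA and fork URL are placeholders):

```bash
# Install a development version identified by a commit SHA, cloning the
# Helm chart from a fork rather than the official repository.
liqoctl install kind --version <commit-sha> \
    --repo-url https://github.com/<your-fork>/liqo
```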

(InstallationCalicoConfiguration)=
@@ -379,7 +379,7 @@ However, by default, Calico scans all existing interfaces on a node to detect ne
To prevent misconfigurations, Calico must therefore be configured to skip Liqo-managed interfaces during this process.
This is required if Calico is configured in *BGP* mode, but not if the *VPC native setup* is leveraged.

-In Calico v3.17 and above, this can be performed patching the *Installation CR*, adding the following:
+In Calico v3.17 and above, this can be performed by patching the *Installation CR*, adding the following:

```yaml
apiVersion: operator.tigera.io/v1
# The fields below the apiVersion are truncated in this diff; they are
# reconstructed here following the standard Calico operator Installation CR
# (an assumption: verify against your Calico version's documentation).
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    nodeAddressAutodetectionV4:
      # Skip the Liqo-managed interfaces (liqo.*) during detection.
      skipInterface: liqo.*
```
2 changes: 1 addition & 1 deletion docs/installation/liqoctl.md
@@ -127,7 +127,7 @@ After reloading your shell, *liqoctl* autocompletion should be working.
The *liqoctl* completion script for Zsh can be generated with the `liqoctl completion zsh` command.
-If shell completion is not already enabled in your environment you will need to enable it.
+If shell completion is not already enabled in your environment, you will need to enable it.
You can execute the following once:
```zsh
# The exact command is truncated in this diff; this is the conventional
# Zsh compinit bootstrap (an assumption based on standard completion setups).
echo "autoload -U compinit; compinit" >> ~/.zshrc
```
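Alternatively, to load completions for the current shell session only, the standard Cobra-style invocation should work (a sketch, assuming default *liqoctl* completion behavior):

```zsh
source <(liqoctl completion zsh)
```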
20 changes: 10 additions & 10 deletions docs/installation/requirements.md
@@ -6,12 +6,12 @@ This page presents an overview of the main requirements, both in terms of **reso

Typically, Liqo requires very **limited resources** (i.e., in terms of CPU, RAM, and network bandwidth) for the control plane execution, and it is compatible with both standard clusters and more **resource-constrained devices** (e.g., Raspberry Pi), leveraging K3s as the Kubernetes distribution.

-The exact numbers depend on the **number of established peerings and offloaded pods**, as well as on the **cluster size** and whether it is leveraged in testing or production scenarios.
+The exact numbers depend on the **number of established peerings and offloaded pods**, as well as on the **cluster size** and whether it is deployed in testing or production scenarios.
As a rule of thumb, the Liqo control plane as a whole, executed on a two-node KinD cluster, peered with a remote cluster, and while offloading 100 pods, conservatively demands less than:

* Half a CPU core (only during transient periods, while CPU consumption is practically negligible the rest of the time).
* 200 MB of RAM (this metric increases the more pods are offloaded to remote clusters).
-* 5 Mbps of cross-cluster control plane traffic (only during transient periods). Data plane traffic, instead, depends on the applications and their placements across the clusters.
+* 5 Mbps of cross-cluster control plane traffic (only during transient periods). Data plane traffic, instead, depends on the applications and their actual placements across the clusters.

A thorough analysis of the Liqo performance compared to vanilla Kubernetes, including the characterization of the resources consumed by Liqo, is presented in a [dedicated blog post](https://medium.com/the-liqo-blog/benchmarking-liqo-kubernetes-multi-cluster-performance-d77942d7f67c).

@@ -40,26 +40,26 @@ This implies also that any network device (**NAT**, **firewall**, etc.) sitting

The tuple *<IP/port>* exported by the Liqo services (i.e., `liqo-auth`, `liqo-gateway`) depends on the Liqo configuration chosen at installation time, which in turn may depend on the physical setup of your cluster and the characteristics of your service.

-**Authentication Service**: when you install Liqo, you can choose to expose the authentication service through a *LoadBalancer* service, a *NodePort* service, or an *Ingress* (the latter allows the service to be exposed as *ClusterIP*).
+**Authentication Service**: when you install Liqo, you can choose to expose the authentication service through a *LoadBalancer* service, a *NodePort* service, or an *Ingress* (the last allows the service to be exposed as *ClusterIP*).
This choice depends (1) on your needs, (2) on the cluster configuration (e.g., a *NodePort* cannot be used if your nodes have private IP addresses, hence cannot be reached from the Internet), and (3) on whether the above primitives (e.g., the *Ingress Controller*) are available in your cluster.

**Network Gateway**: the same also applies to the network gateway, except that it cannot be exported through an *Ingress*.
In fact, while the authentication service uses a standard HTTP/REST interface, the network gateway is the termination of a UDP-based network tunnel; hence only *LoadBalancer* and *NodePort* services are supported.

```{admonition} Note
-Liqo supports scenarios in which only one of the two network gateway is publicly reachable from the remote cluster (i.e., in terms of *<IP/port>* tuple), although communication must be allowed by possible firewalls sitting in the path.
+Liqo supports scenarios in which, given two clusters, only one of the two network gateways is publicly reachable from the remote cluster (i.e., in terms of *<IP/port>* tuple), although communication must be allowed by possible firewalls sitting in the path.
```

By default, *liqoctl* exposes both the authentication service and the network gateway through a **dedicated *LoadBalancer* service**, falling back to a *NodePort* for simpler setups (i.e., KinD and K3s).
-However, more advanced configurations can be achieved by configuring the proper [Helm chart parameters](https://github.com/liqotech/liqo/tree/master/deployments/liqo), either directly or customizing the installation process [through *liqoctl*](InstallCustomization).
+However, more advanced configurations can be achieved by configuring the proper [Helm chart parameters](https://github.com/liqotech/liqo/tree/master/deployments/liqo), either directly or by customizing the installation process [through *liqoctl*](InstallCustomization).
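As a sketch, forcing both services to *NodePort* through Helm value overrides might look as follows (the `--set` passthrough and the exact value keys, `auth.service.type` and `gateway.service.type`, are assumptions to be verified against the chart's values file):

```bash
liqoctl install kubeadm \
    --set auth.service.type=NodePort \
    --set gateway.service.type=NodePort
```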

An overview of the overall connectivity requirements to establish out-of-band control plane peerings in Liqo is shown in the figure below.

![Out-of-band peering network requirements](/_static/images/installation/requirements/out-of-band.drawio.svg)

#### Additional considerations

-The choice of the way you expose Liqo services to remote cluster may not be trivial in some cases.
+The choice of the way you expose Liqo services to remote clusters may not be trivial in some cases.
Here, we list some additional notes you should consider in your choice:

* **NodePort service**: although a *NodePort* service can be used to expose the authentication service and the network gateway, the IP addresses of the nodes are often private, hence unsuitable for connections originating from the Internet.
@@ -78,26 +78,26 @@ Yet, in such situations, we suggest leveraging the in-band peering, as it simpli
The establishment of an in-band control plane peering with a remote cluster requires only that the **network gateways are *mutually* reachable**, since all the Liqo control plane traffic is then configured to flow inside the VPN tunnel.
All considerations presented above and referring to the exposition of the network gateway apply also in this case.

-Given the connectivity requirements are a subset, this solution is compatible with the configurations enabling the out-of-band peering approach.
+Given the connectivity requirements are a subset of the previous case, this solution is compatible with the configurations that enable the out-of-band peering approach.
Additionally, it:

* Supports scenarios characterized by a **non publicly accessible Kubernetes API Server**.
* Allows exposing the authentication service as a *ClusterIP* service, reducing the number of externally exposed services.
* Enables setups with one cluster **behind NAT**, since the VPN tunnel can be established successfully even if only one of the two network gateways is publicly reachable from the other cluster.

-An overview of the overall connectivity requirements to establish in-band peerings in Liqo is shown in the figure below.
+An overview of the overall connectivity requirements to establish in-band control plane peerings in Liqo is shown in the figure below.

![In-band peering network requirements](/_static/images/installation/requirements/in-band.drawio.svg)

```{warning}
-Due to current limitations, the establishment of an in-band peering may not complete successfully in case the authentication service is exposed through an Ingress, delegating to it TLS termination (i.e., when TLS is disabled on the authentication service).
+Due to current limitations, the establishment of an in-band peering may not complete successfully in case the authentication service is exposed through an Ingress to which the TLS termination is delegated (i.e., when TLS is disabled on the authentication service).
```

(RequirementsConnectivityFirewall)=

### Network firewalls

-In some cases, especially on production setups, additional network limitations are present, such as firewalls that may impair network connectivity, which must be considered in order to enable Liqo peering.
+In some cases, especially on production setups, additional network limitations are present, such as firewalls that may impair network connectivity, which must be considered in order to enable Liqo peerings.

Depending on your configuration and the selected peering approach, you may have to configure existing firewalls to enable remote clusters to contact either the `liqo-gateway` only, or all three endpoints (i.e., `liqo-auth`, `liqo-gateway` and the Kubernetes API server) that need to be publicly accessible in the peering phase.
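As an illustrative sketch for a host-level firewall, assuming the network gateway terminates its WireGuard tunnel on UDP port 5871 (an assumption: verify the actual port exposed by your installation):

```bash
# Allow inbound traffic towards the liqo-gateway tunnel endpoint.
sudo ufw allow 5871/udp comment 'liqo-gateway (WireGuard)'
```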

4 changes: 2 additions & 2 deletions docs/installation/uninstall.md
@@ -1,6 +1,6 @@
# Uninstall

-Liqo can be uninstalled leveraging the dedicated *liqoctl* command:
+Liqo can be uninstalled by leveraging the dedicated *liqoctl* command:

```bash
liqoctl uninstall
@@ -20,7 +20,7 @@ To this end, *liqoctl* performs a set of pre-checks and aborts the process in ca
## Purge CRDs

By default, the uninstallation process does not remove the Liqo CRDs and the system namespaces.
-These operations can be performed adding the `--purge` flag:
+These operations can be performed by adding the `--purge` flag:

```bash
liqoctl uninstall --purge
