New Extended tutorial for Liqo.
This commit provides a more in-depth tutorial showing how to use Liqo's main features.
Andreagit97 authored and palexster committed Aug 3, 2021
1 parent 9e2ec55 commit c7f30e1
Showing 20 changed files with 1,198 additions and 18 deletions.
99 changes: 99 additions & 0 deletions docs/examples/extendedGettingStarted/3_kind_clusters.sh
@@ -0,0 +1,99 @@
#!/bin/bash

function setup_arch_and_os(){
  ARCH=$(uname -m)
  case $ARCH in
    armv5*) ARCH="armv5";;
    armv6*) ARCH="armv6";;
    armv7*) ARCH="arm";;
    aarch64) ARCH="arm64";;
    x86) ARCH="386";;
    x86_64) ARCH="amd64";;
    i686) ARCH="386";;
    i386) ARCH="386";;
    *) echo "Error: architecture '${ARCH}' unknown"; exit 1 ;;
  esac

  OS=$(uname | tr '[:upper:]' '[:lower:]')
  case "$OS" in
    # Minimalist GNU for Windows
    "mingw"*) OS='windows'; return ;;
  esac

  # The list of kind release binaries is available at https://github.com/kubernetes-sigs/kind/releases
  # The kubectl supported architecture list is a superset of the kind one, so no further compatibility check is needed.
  local supported="darwin-amd64\nlinux-amd64\nlinux-arm64\nlinux-ppc64le\nwindows-amd64"
  if ! echo "${supported}" | grep -q "${OS}-${ARCH}"; then
    echo "Error: no version of kind is available for '${OS}-${ARCH}'"
    return 1
  fi
}

setup_arch_and_os

CLUSTER_NAME=cluster
CLUSTER_NAME_1=${CLUSTER_NAME}1
CLUSTER_NAME_2=${CLUSTER_NAME}2
CLUSTER_NAME_3=${CLUSTER_NAME}3
KIND_VERSION="v0.10.0"
KUBECTL_DOWNLOAD=false

echo "Downloading Kind ${KIND_VERSION}"
TMPDIR=$(mktemp -d -t liqo-install.XXXXXXXXXX)
BINDIR="${TMPDIR}/bin"
mkdir -p "${BINDIR}"

if ! command -v docker &> /dev/null; then
  echo "MISSING REQUIREMENT: docker engine could not be found on your system. Please install docker engine to continue: https://docs.docker.com/get-docker/"
  return 1
fi

if ! command -v kubectl &> /dev/null; then
  echo "WARNING: kubectl could not be found. Downloading and installing it locally..."
  if ! curl --fail -Lo "${BINDIR}"/kubectl "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/${OS}/${ARCH}/kubectl"; then
    echo "Error: Unable to download kubectl for '${OS}-${ARCH}'"
    return 1
  fi
  chmod +x "${BINDIR}"/kubectl
  export PATH=${PATH}:${BINDIR}
  KUBECTL_DOWNLOAD=true
fi

curl -Lo "${BINDIR}"/kind https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-${OS}-${ARCH}
chmod +x "${BINDIR}"/kind
KIND="${BINDIR}/kind"

echo -e "\nCleaning: Deleting old clusters"
${KIND} delete cluster --name $CLUSTER_NAME_1
${KIND} delete cluster --name $CLUSTER_NAME_2
${KIND} delete cluster --name $CLUSTER_NAME_3
echo -e "\n"


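# Generate the kind configuration shared by all three clusters: the pod and service CIDRs
# are the same values later passed to the Liqo installer (see helm_liqo.sh).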
cat << EOF > liqo-cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  serviceSubnet: "10.90.0.0/12"
  podSubnet: "10.200.0.0/16"
nodes:
  - role: control-plane
    image: kindest/node:v1.19.1
EOF

${KIND} create cluster --name $CLUSTER_NAME_1 --kubeconfig liqo_kubeconf_1 --config liqo-cluster-config.yaml --wait 2m
echo -e "\n ---------------- \n"
${KIND} create cluster --name $CLUSTER_NAME_2 --kubeconfig liqo_kubeconf_2 --config liqo-cluster-config.yaml --wait 2m
echo -e "\n ---------------- \n"
${KIND} create cluster --name $CLUSTER_NAME_3 --kubeconfig liqo_kubeconf_3 --config liqo-cluster-config.yaml --wait 2m
echo -e "\n ---------------- \n"

if [ "$KUBECTL_DOWNLOAD" = "true" ]; then
echo -e "\nkubectl is now installed in ${BINDIR}/kubectl and has been added to your PATH. To make it available without explicitly setting the PATH variable"
echo "You can copy it to a system-wide location such as /usr/local/bin by typing:"
echo "sudo cp ${BINDIR}/kubectl /usr/local/bin"
fi
echo "INSTALLATION COMPLETED";
24 changes: 24 additions & 0 deletions docs/examples/extendedGettingStarted/helm_liqo.sh
@@ -0,0 +1,24 @@
#!/bin/bash

region=(eu-west us-west eu-east)
provider=(Azure AWS GKE)
kubeconfigs=(./liqo_kubeconf_1 ./liqo_kubeconf_2 ./liqo_kubeconf_3)
echo -e "\n\nSTART LIQO INSTALLATION:"
echo -e "---------------------------\n"
for i in {1..3}
do
  # Bash arrays are zero-indexed, while the clusters are numbered starting from 1.
  idx=$((i-1))
  echo -e "Install Liqo on cluster-${i}\n"
  export KUBECONFIG=${kubeconfigs[$idx]}
  # The liqo/liqo chart is assumed to be available from an already-added Helm repository
  # (e.g. helm repo add liqo https://helm.liqo.io/).
  helm install liqo liqo/liqo --namespace "liqo" \
    --set auth.config.allowEmptyToken=true \
    --set discovery.config.clusterName="cluster-$i" \
    --set discovery.config.clusterLabels."topology\.liqo\.io/region"="${region[$idx]}" \
    --set discovery.config.clusterLabels."liqo\.io/provider"="${provider[$idx]}" \
    --set discovery.config.autojoin=false \
    --set networkManager.config.podCIDR="10.200.0.0/16" \
    --set networkManager.config.serviceCIDR="10.90.0.0/12" \
    --create-namespace
  echo "Liqo installed on cluster-${i}"
  echo -e "---------------------------\n"
done
echo -e "LIQO INSTALLATION COMPLETED"
13 changes: 13 additions & 0 deletions docs/examples/extendedGettingStarted/namespaceOff_test1.yaml
@@ -0,0 +1,13 @@
apiVersion: offloading.liqo.io/v1alpha1
kind: NamespaceOffloading
metadata:
  name: offloading
  namespace: liqo-test
spec:
  clusterSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: liqo.io/provider
            operator: In
            values:
              - GKE
8 changes: 8 additions & 0 deletions docs/examples/extendedGettingStarted/namespaceOff_test2.yaml
@@ -0,0 +1,8 @@
apiVersion: offloading.liqo.io/v1alpha1
kind: NamespaceOffloading
metadata:
  name: offloading
  namespace: liqo-test
spec:
  namespaceMappingStrategy: EnforceSameName
  podOffloadingStrategy: Remote
21 changes: 21 additions & 0 deletions docs/examples/extendedGettingStarted/pod-violation.yaml
@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-remote-aws
spec:
  containers:
    - name: nginx
      image: nginxdemos/hello
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          name: web
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: liqo.io/provider
                operator: In
                values:
                  - AWS
83 changes: 83 additions & 0 deletions docs/examples/extendedGettingStarted/test1.yaml
@@ -0,0 +1,83 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-remote-deployment
  labels:
    app: liqo-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: liqo-test
      zone: remote
  template:
    metadata:
      labels:
        app: liqo-test
        zone: remote
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: liqo.io/type
                    operator: In
                    values:
                      - virtual-node
      containers:
        - name: nginx
          image: nginxdemos/hello
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-local-deployment
  labels:
    app: liqo-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: liqo-test
      zone: local
  template:
    metadata:
      labels:
        app: liqo-test
        zone: local
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: liqo.io/type
                    operator: NotIn
                    values:
                      - virtual-node
      containers:
        - name: nginx
          image: nginxdemos/hello
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: web
---
apiVersion: v1
kind: Service
metadata:
  name: liqo-test
spec:
  ports:
    - name: web
      port: 80
      protocol: TCP
      targetPort: web
  selector:
    app: liqo-test
  type: ClusterIP
37 changes: 37 additions & 0 deletions docs/examples/extendedGettingStarted/test2.yaml
@@ -0,0 +1,37 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: liqo-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: liqo-test
  template:
    metadata:
      labels:
        app: liqo-test
    spec:
      containers:
        - name: nginx
          image: nginxdemos/hello
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: web
---
apiVersion: v1
kind: Service
metadata:
  name: liqo-test
spec:
  ports:
    - name: web
      port: 80
      protocol: TCP
      targetPort: web
  selector:
    app: liqo-test
  type: ClusterIP
22 changes: 20 additions & 2 deletions docs/pages/GettingStarted/Extended/_index.md
@@ -1,4 +1,22 @@
---
title: Extended Tutorial
weight: 2
---

The following steps will guide you through a tour of the main Liqo features.

* [Set up the Playground](./kind): Deploy three Kubernetes in Docker (KinD) clusters.
* [Install Liqo](./install): Install Liqo on all three clusters.
* [Enable Peering](./peer): The home cluster establishes a peering with each remote cluster.
* [Liqo main features](./select_clusters): Explore the main Liqo features, starting with selective offloading onto a specific set of clusters.
* [Hard constraints](./hard_constraints): Learn how Liqo handles violations of the offloading constraints.
* [Change topology](./change_topology): See how to promptly change the deployment topology.
* [Remote service access](./remote_service_access): Learn how to reach a service from the local cluster even when all its endpoints are deployed remotely.
* [Dynamic topology](./dynamic_topology): Discover how Liqo makes deployment topologies highly dynamic.
* [Uninstall Liqo](./uninstall): Uninstall Liqo from your clusters.
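
Throughout the tutorial, the three clusters are accessed through the `$KUBECONFIG_1`, `$KUBECONFIG_2` and `$KUBECONFIG_3` environment variables. A minimal sketch of how to set them, assuming the kubeconfig files generated by the playground script (`liqo_kubeconf_1`, `liqo_kubeconf_2`, `liqo_kubeconf_3`) are in the current directory:

```bash
# Point each variable to the kubeconfig written by the KinD playground script.
export KUBECONFIG_1=$(pwd)/liqo_kubeconf_1
export KUBECONFIG_2=$(pwd)/liqo_kubeconf_2
export KUBECONFIG_3=$(pwd)/liqo_kubeconf_3

# Example: make kubectl target cluster-1.
export KUBECONFIG=$KUBECONFIG_1
```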






67 changes: 67 additions & 0 deletions docs/pages/GettingStarted/Extended/change_topology.md
@@ -0,0 +1,67 @@
---
title: Change topology
weight: 6
---

You may need to promptly change your topology in response to certain events, or simply to meet new requirements.
With Liqo, you can do this in two simple steps:

1. Remove the old configuration, i.e. the old NamespaceOffloading resource, from the Liqo-enabled namespace.
2. Create the new resource with a new configuration.

Simple, right? Let's walk through it together.

### Remove the old resource

You can remove the old NamespaceOffloading by simply typing:

```bash
export KUBECONFIG=$KUBECONFIG_1
kubectl delete namespaceoffloadings offloading -n liqo-test
```

{{% notice warning %}}
Deleting the NamespaceOffloading also deletes all the previously created remote namespaces, together with all their content.
Make sure that everything you have deployed remotely is no longer needed.
{{% /notice %}}
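
Before removing the resource, you may want to double-check what is currently offloaded from the namespace. A quick, optional check (the `-o wide` output shows on which node, local or virtual, each pod is running):

```bash
export KUBECONFIG=$KUBECONFIG_1
kubectl get pods -n liqo-test -o wide
```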

As expected, there is no longer any remote namespace inside **cluster-3**:

```bash
export KUBECONFIG=$KUBECONFIG_3
kubectl get namespaces
```

### Create the new resource

Now, let's create a new configuration with the following requirements:

* All remote clusters are selected.
* Pods can be deployed only remotely.
* The remote namespaces must have the same name as the local one.

The NamespaceOffloading resource will look like this:

{{% render-code file="static/examples/extendedGettingStarted/namespaceOff_test2.yaml" language="yaml" %}}

It is not necessary to specify a clusterSelector field to select all the available clusters, as we have seen in [the configuration section](#).

We can create the new resource in the "**liqo-test**" namespace:

```bash
export KUBECONFIG=$KUBECONFIG_1
kubectl apply -f https://raw.githubusercontent.com/Andreagit97/liqo/atr/GettingStarted/docs/examples/extendedGettingStarted/namespaceOff_test2.yaml -n liqo-test
```
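
To verify that the new configuration has been accepted, you can inspect the resource just created (a simple sanity check; the exact status fields may vary across Liqo versions):

```bash
export KUBECONFIG=$KUBECONFIG_1
kubectl get namespaceoffloadings offloading -n liqo-test -o yaml
```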

We should now see two remote namespaces: one inside **cluster-2** and the other inside **cluster-3**:

```bash
export KUBECONFIG=$KUBECONFIG_2
kubectl get namespaces liqo-test
```
```bash
export KUBECONFIG=$KUBECONFIG_3
kubectl get namespaces liqo-test
```
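
Since the new configuration sets `podOffloadingStrategy: Remote`, every pod created in the **liqo-test** namespace will now be scheduled on a virtual node. As a quick, optional check (the pod name is arbitrary and the image is the same demo image used in the other examples):

```bash
export KUBECONFIG=$KUBECONFIG_1
# Create a throwaway pod in the offloaded namespace...
kubectl run nginx-check --image=nginxdemos/hello -n liqo-test
# ...and verify from the NODE column that it has been scheduled on a virtual node.
kubectl get pod nginx-check -n liqo-test -o wide
```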

Now that our new topology has been created, we can see [how to contact remote pods from our local cluster](../remote_service_access).
