Simple MCS Demo with 3 k3s clusters by ErieCanal


Here are the steps for setting up a demo with three k3s clusters. This is the demo used for the OSM community call on Dec 13, and here are the slides used on that call: OSM MCS Demo by ErieCanal (Slides)

Step 1 : Prepare 3 k3s clusters

In this demo, we use three k3s clusters. cluster-1 hosts the ErieCanal control plane and the service consumer; cluster-2 and cluster-3 host the service providers. Traffic from the service consumer will be load balanced across cluster-2 and cluster-3 by ErieCanal.

We use three Azure VMs, each running a single-node k3s. Here is how to install k3s on the Azure VMs:

root@caishu-test3:~# umount /sys/fs/cgroup/cpu\,cpuacct 
# In case you need to use the China mirror: curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn INSTALL_K3S_EXEC="--disable traefik" sh -
root@caishu-test3:~# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -s -
[INFO]  Finding release for channel stable
[INFO]  Using v1.25.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.25.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Because ErieCanal installs a builtin ingress, we do not install the k3s default Traefik ingress.
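
If you want to double-check that Traefik is really disabled, a quick look at the kube-system namespace is enough (an extra check, not part of the original demo):

# Should print nothing, because k3s was installed with --disable traefik
kubectl get pods -n kube-system | grep -i traefik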

Check k3s status:

root@caishu-test3:~# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   local-path-provisioner-79f67d76f8-fpg8s   1/1     Running   0          23m
kube-system   coredns-597584b69b-s66hk                  1/1     Running   0          23m
kube-system   metrics-server-5c8978b444-t2drg           1/1     Running   0          23m
root@caishu-test3:~# kubectl get nodes -A
NAME           STATUS   ROLES                  AGE   VERSION
caishu-test3   Ready    control-plane,master   24m   v1.25.4+k3s1

Pay attention here: this step needs to be run on all three Azure VMs.

Step 2 : Install ErieCanal to the 3 clusters

Following the instructions from the ErieCanal README (https://github.com/flomesh-io/ErieCanal#installing-the-chart), run helm install on all three clusters:

root@caishu-test3:~# ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config
root@caishu-test3:~# helm repo add erie-canal https://flomesh-io.github.io/ErieCanal
"erie-canal" has been added to your repositories
root@caishu-test3:~# helm install erie-canal erie-canal/erie-canal --namespace erie-canal --create-namespace --version=0.1.0-alpha.1
NAME: erie-canal
LAST DEPLOYED: Wed Dec 14 06:22:50 2022
NAMESPACE: erie-canal
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Congratulations! The ErieCanal control plane has been installed in your Kubernetes cluster!

Confirm all three ErieCanal pods are running (on all three clusters):

root@caishu-test3:~# kubectl get pods -n erie-canal
NAME                                       READY   STATUS    RESTARTS   AGE
erie-canal-repo-78b558c9f8-66nk8           1/1     Running   0          3m2s
erie-canal-manager-845c558489-np6r2        1/1     Running   0          3m2s
erie-canal-ingress-pipy-798c66c595-pw2w5   1/1     Running   0          3m2s
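
If you prefer to block until the ErieCanal control plane is ready instead of polling kubectl get pods, a plain kubectl wait also works (a convenience step, not part of the original transcript):

# Wait up to 5 minutes for all ErieCanal pods to become Ready
kubectl wait --for=condition=Ready pods --all -n erie-canal --timeout=300s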

Step 3 : Join clusters into ClusterSet

For each of the three clusters, create a YAML file with the cluster definition. We name them cluster1.yaml, cluster2.yaml, and cluster3.yaml. The content looks like:

apiVersion: flomesh.io/v1alpha1
kind: Cluster
metadata:
  name: cluster1
spec:
  gatewayHost: 10.0.0.8
  gatewayPort: 80
  kubeconfig: |+
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTnpBNU9EazBOVEF3SGhjTk1qSXhNakUwTURNME5ERXdXaGNOTXpJeE1qRXhNRE0wTkRFdwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTnpBNU9EazBOVEF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTU2lLQWMrdmZ6amFxYklJT2pJLzgxdnE4MytKUmhDajB2dzdYVHlSaUMKMnRwT2FJeEsyd2lUazdvc0JQM2lLMytQSEdCNnBJTEVuZnZramlqUlE5T2VvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXcxWSswTE04eXJBNnlDcUVCK1J4CnRPVjZIaGN3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQUxQUk9XRFJ1ejJmenoxdG1oS1lwa1FmWXZZTlR2aDEKV3l5U2lsOHhLUVFNQWlCUkpxYy9tdC9wdTVEQXVVczc2N3VQdW1tS096eWR4aTBLcE81Wi8zQzQ1UT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
        server: https://127.0.0.1:6443
      name: default
    contexts:
    - context:
        cluster: default
        user: default
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: default
      user:
        client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRlZ0F3SUJBZ0lJRlhZeHF2cEpQRE13Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOamN3T1RnNU5EVXdNQjRYRFRJeU1USXhOREF6TkRReE1Gb1hEVEl6TVRJeApOREF6TkRReE1Gb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJIbHRiNVVrQ09nZ2JKSmUKbWpQRzJ3VE1EU3kxcmNvSG41Uk1ZSDF5Nk5ObytobzNmaitVTDA2Y0hNVTFhUnlBWERjS3NOMndVVDN6bkk3SAowNlo1cXZ1alNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUjJXYkpmRmQvcFFVdkZPd3V6OHg3cjdIei8rVEFLQmdncWhrak9QUVFEQWdOSkFEQkcKQWlFQXRMOElkTWFuV21sY0x4WlY2UmtrZytoL1c4ZmdIWVlTcFBwQUFXZndpWlFDSVFDZnQ3L2poeUFDRGFOMAptdFNjcWdkTFY4a2pCTlYvT1E2UlNwY2M2RlNnSmc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZGpDQ0FSMmdBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwClpXNTBMV05oUURFMk56QTVPRGswTlRBd0hoY05Nakl4TWpFME1ETTBOREV3V2hjTk16SXhNakV4TURNME5ERXcKV2pBak1TRXdId1lEVlFRRERCaHJNM010WTJ4cFpXNTBMV05oUURFMk56QTVPRGswTlRBd1dUQVRCZ2NxaGtqTwpQUUlCQmdncWhrak9QUU1CQndOQ0FBUzVBdG4rRkF5aGM0dVNWRDRucHRzcVFBak9FSVJITGNmY0dxajMvNk9UClZKTVNLRmxlbnluWERqTUtUbjhnd1c2b0VkWFV1c2sremZUQlo0Y2FjYkxKbzBJd1FEQU9CZ05WSFE4QkFmOEUKQkFNQ0FxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVkbG15WHhYZjZVRkx4VHNMcy9NZQo2K3g4Ly9rd0NnWUlLb1pJemowRUF3SURSd0F3UkFJZ1pqdVArWjFmQ2d2bndJNXJZSTNYMlhJYUlEcFNtNmVWCnl4cGtpUEp5RGVvQ0lERXQ0OXVGNFQrczBza3o1clhvcUJ0VWZ5Mmo4MVQxbmk5OFJoVitwOWxQCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
        client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSU1zUkgvc2lEa2k1Y2Q4QVNsRWFoMk9VU0NLa3cwa0Y4cm0zbko0ZmNsQkZvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFZVcxdmxTUUk2Q0Jza2w2YU04YmJCTXdOTExXdHlnZWZsRXhnZlhMbzAyajZHamQrUDVRdgpUcHdjeFRWcEhJQmNOd3F3M2JCUlBmT2Nqc2ZUcG5tcSt3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=

Cluster is a CRD introduced by ErieCanal; it defines a 'Cluster' in a 'ClusterSet'. These concepts are defined in MCS KEP-1645 ( https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api#terminology ). Some explanation of this YAML:

  • line #4 defines the cluster name; it must be unique for each k8s cluster inside a ClusterSet
  • line #6 is the cluster's ingress hostname or IP. ErieCanal installs a builtin ingress whose default port is 80. Behind the scenes, traffic from a service consumer will be routed by the sidecar's outbound proxy to the other cluster's ingress when the service providers are in a different cluster
  • line #7 is the ingress port, which ErieCanal defaults to 80
  • lines #9~27 come from the kubeconfig; on a k3s host it is ~/.kube/config . We need to change line #13 to the cluster's API server address (see the sketch below for one way to generate these files)
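
For reference, here is one way cluster2.yaml could be assembled directly on the cluster-2 VM. This is only a sketch, assuming the cluster-2 API server is reachable from the other clusters at the same address as its gateway (10.0.0.8) on port 6443; adjust names and addresses to your environment:

# Build cluster2.yaml from the local k3s kubeconfig: replace the loopback API server
# address with the VM's reachable address and indent the file under 'kubeconfig: |+'
cat > cluster2.yaml <<EOF
apiVersion: flomesh.io/v1alpha1
kind: Cluster
metadata:
  name: cluster2
spec:
  gatewayHost: 10.0.0.8
  gatewayPort: 80
  kubeconfig: |+
$(sed -e 's#https://127.0.0.1:6443#https://10.0.0.8:6443#' -e 's/^/    /' /etc/rancher/k3s/k3s.yaml)
EOF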

Double-check that all three files cluster1.yaml, cluster2.yaml, and cluster3.yaml are correct, then apply them on the ErieCanal control plane -- that is cluster-1. Like this:

root@caishu-test1:~# kubectl apply -f cluster1.yml 
cluster.flomesh.io/cluster1 created
root@caishu-test1:~# kubectl apply -f cluster2.yml 
cluster.flomesh.io/cluster2 created
root@caishu-test1:~# kubectl apply -f cluster3.yml 
cluster.flomesh.io/cluster3 created

Then check and confirm that all clusters are correctly managed (run this on cluster-1, which is the ErieCanal control plane):

root@caishu-test1:~# kubectl get clusters -A
NAME       REGION    ZONE      GROUP     GATEWAY HOST   GATEWAY PORT   MANAGED   MANAGED AGE   AGE
local      default   default   default                  80                                     112m
cluster1   default   default   default   10.0.0.7       80             True      72m           72m
cluster3   default   default   default   10.0.0.9       80             True      5m8s          5m8s
cluster2   default   default   default   10.0.0.8       80             True      5s            5s

Step 4 : Install osm-edge

osm-edge (https://github.com/flomesh-io/osm-edge) is a downstream project of osm (https://github.com/openservicemesh/osm); it uses pipy (https://github.com/flomesh-io/pipy) as the sidecar proxy instead of the Envoy proxy.

In this MCS demo, osm-edge provides east-west traffic management: it intercepts the service consumer's outbound traffic and routes it to a local service or to a service in another cluster.

Download osm-edge from the osm-edge release page:

root@caishu-test1:~# wget https://github.com/flomesh-io/osm-edge/releases/download/v1.3.0-beta.3/osm-edge-v1.3.0-beta.3-linux-amd64.tar.gz
--2022-12-14 09:25:57--  https://github.com/flomesh-io/osm-edge/releases/download/v1.3.0-beta.3/osm-edge-v1.3.0-beta.3-linux-amd64.tar.gz
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/495229354/97482d2a-9ae0-43a2-a971-75a165c7ee49?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221214%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221214T092557Z&X-Amz-Expires=300&X-Amz-Signature=86a04d6b9a8a31cbdda376893c1a8c37efb0fbee57393c81bd15f46cc086bca6&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=495229354&response-content-disposition=attachment%3B%20filename%3Dosm-edge-v1.3.0-beta.3-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2022-12-14 09:25:58--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/495229354/97482d2a-9ae0-43a2-a971-75a165c7ee49?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221214%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221214T092557Z&X-Amz-Expires=300&X-Amz-Signature=86a04d6b9a8a31cbdda376893c1a8c37efb0fbee57393c81bd15f46cc086bca6&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=495229354&response-content-disposition=attachment%3B%20filename%3Dosm-edge-v1.3.0-beta.3-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 20169543 (19M) [application/octet-stream]
Saving to: ‘osm-edge-v1.3.0-beta.3-linux-amd64.tar.gz’

osm-edge-v1.3.0-beta.3-linux-amd64.tar.gz 100%[=====================================================================================>]  19.23M  3.68MB/s    in 9.0s    

2022-12-14 09:26:08 (2.14 MB/s) - ‘osm-edge-v1.3.0-beta.3-linux-amd64.tar.gz’ saved [20169543/20169543]

root@caishu-test1:~# tar xzvf osm-edge-v1.3.0-beta.3-linux-amd64.tar.gz 
linux-amd64/
linux-amd64/osm
linux-amd64/LICENSE
linux-amd64/README.md
root@caishu-test1:~# cp linux-amd64/osm /usr/local/bin/
root@caishu-test1:~# osm version
CLI Version: version.Info{Version:"v1.3.0-beta.3", GitCommit:"aaac88b1692cd643940817bdfc7c8bff9a58776c", BuildDate:"2022-12-14-00:45"}
Unable to find OSM control plane in the cluster

Then run osm install like this :

root@caishu-test1:~# export osm_namespace=osm-system
export osm_mesh_name=osm
root@caishu-test1:~# dns_svc_ip="$(kubectl get svc -n kube-system -l k8s-app=kube-dns -o jsonpath='{.items[0].spec.clusterIP}')"
root@caishu-test1:~# echo $dns_svc_ip
10.43.0.10
root@caishu-test1:~# osm install \
    --mesh-name "$osm_mesh_name" \
    --osm-namespace "$osm_namespace" \
    --set=osm.certificateProvider.kind=tresor \
    --set=osm.image.registry=flomesh \
    --set=osm.image.tag=1.3.0-beta.3 \
    --set=osm.image.pullPolicy=Always \
    --set=osm.sidecarLogLevel=error \
    --set=osm.controllerLogLevel=warn \
    --timeout=900s \
    --set=osm.localDNSProxy.enable=true \
    --set=osm.localDNSProxy.primaryUpstreamDNSServerIPAddr="${dns_svc_ip}"
osm-preinstall[osm-preinstall-qqwrs] Done
osm-bootstrap[osm-bootstrap-5596d69dc4-k5npg] Done
osm-controller[osm-controller-96785fd87-789jk] Done
osm-injector[osm-injector-84cb55d5b5-rkzh9] Done
OSM installed successfully in namespace [osm-system] with mesh name [osm]

Check osm running status:

root@caishu-test1:~# kubectl get pods -n osm-system
NAME                             READY   STATUS    RESTARTS   AGE
osm-bootstrap-5596d69dc4-k5npg   1/1     Running   0          2m47s
osm-controller-96785fd87-789jk   2/2     Running   0          2m47s
osm-injector-84cb55d5b5-rkzh9    1/1     Running   0          2m47s

Pay attention here: we install osm-edge on all three k3s clusters, but in this demo only cluster-1 is REQUIRED to run osm-edge, since the service consumer runs on cluster-1.
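
As a quick sanity check (not in the original transcript), the osm CLI can also list the control planes it finds in the current cluster:

# Should list the 'osm' mesh running in namespace osm-system
osm mesh list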

Step 5 : Enable osm on NS (service-consumer)

Now we deploy the service consumer onto cluster-1 and have it managed by osm-edge. The service consumer is a curl client running in a pod.

root@caishu-test1:~# kubectl create namespace curl
namespace/curl created
root@caishu-test1:~# osm namespace add curl
Namespace [curl] successfully added to mesh [osm]
root@caishu-test1:~# kubectl apply -n curl -f https://raw.githubusercontent.com/cybwan/osm-edge-start-demo/main/demo/multi-cluster/curl.curl.yaml
serviceaccount/curl created
service/curl created
deployment.apps/curl created
root@caishu-test1:~# kubectl get pods -n curl
NAME                    READY   STATUS    RESTARTS   AGE
curl-5765c9666d-jskpm   2/2     Running   0          47s

We can see there are two containers running in the curl pod: one is curl, the other is the pipy sidecar proxy injected by osm-edge.
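
If you want to see the two container names explicitly (an extra check, not in the original transcript), jsonpath makes it easy:

# Prints the container names in the curl pod; the exact name of the injected
# sidecar container depends on the osm-edge version
kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].spec.containers[*].name}'; echo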

Step 6 : Install services on cluster-2 and cluster-3

In this demo, the service consumed by curl is a simple hello-world REST service, implemented as a pipy process in a pod. We install it onto both cluster-2 and cluster-3 to demonstrate an MCS service spanning these two clusters. Let's set it up.

On cluster-2:

root@caishu-test2:~# kubectl create namespace pipy
namespace/pipy created
root@caishu-test2:~# kubectl apply -n pipy -f https://raw.githubusercontent.com/cybwan/osm-edge-start-demo/main/demo/multi-cluster/pipy-ok-c2.pipy.yaml
deployment.apps/pipy-ok-c2 created
service/pipy-ok created
service/pipy-ok-c2 created
root@caishu-test2:~# kubectl get pods -o wide -n pipy
NAME                          READY   STATUS    RESTARTS   AGE    IP          NODE           NOMINATED NODE   READINESS GATES
pipy-ok-c2-58b5cbc9d4-22zl4   1/1     Running   0          2m2s   10.42.0.9   caishu-test2   <none>           <none>
root@caishu-test2:~# curl -i http://10.42.0.9:8080/c2/pipy
HTTP/1.1 200 OK
content-length: 24
connection: keep-alive

Hi, I am from Cluster2 !
root@caishu-test2:~# kubectl get svc -n pipy
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
pipy-ok      ClusterIP   10.43.45.71    <none>        8080/TCP   5m10s
pipy-ok-c2   ClusterIP   10.43.164.61   <none>        8080/TCP   5m10s

On cluster-3:

root@caishu-test3:~# kubectl create namespace pipy
namespace/pipy created
root@caishu-test3:~# kubectl apply -n pipy -f https://raw.githubusercontent.com/cybwan/osm-edge-start-demo/main/demo/multi-cluster/pipy-ok-c3.pipy.yaml
deployment.apps/pipy-ok-c3 created
service/pipy-ok created
service/pipy-ok-c3 created
root@caishu-test3:~# kubectl get pods -o wide -n pipy
NAME                         READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
pipy-ok-c3-f8d7f5584-mpwwv   1/1     Running   0          2m59s   10.42.0.18   caishu-test3   <none>           <none>
root@caishu-test3:~# curl -i http://10.42.0.18:8080/c3/pipy
HTTP/1.1 200 OK
content-length: 24
connection: keep-alive

Hi, I am from Cluster3 !
root@caishu-test3:~# kubectl get svc -n pipy
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
pipy-ok      ClusterIP   10.43.64.252   <none>        8080/TCP   5m44s
pipy-ok-c3   ClusterIP   10.43.34.193   <none>        8080/TCP   5m44s

We can see services pipy-ok and pipy-ok-c2 on cluster-2; they are both backed by the same pod pipy-ok-c2-58b5cbc9d4-22zl4. When we access the pod directly on port 8080, it returns Hi, I am from Cluster2 !. Likewise, services pipy-ok and pipy-ok-c3 on cluster-3 are backed by the same pod pipy-ok-c3-f8d7f5584-mpwwv, which returns Hi, I am from Cluster3 !.
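
One way to confirm that both Services really point at the same pod (an extra check, not part of the original transcript) is to look at their Endpoints:

# On cluster-2: both pipy-ok and pipy-ok-c2 should list the same pod address (10.42.0.9:8080)
kubectl get endpoints -n pipy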

Step 7 : Export service on cluster-2 and cluster-3

MCS introduces a new concept, ServiceExport, which is implemented in ErieCanal as a CRD. When we create a ServiceExport, ErieCanal first creates the resource on the local cluster, then creates the corresponding ServiceImport on the other clusters.

Create ServiceExport on cluster-2:

root@caishu-test2:~# cat <<EOF | kubectl apply -f -
apiVersion: flomesh.io/v1alpha1
kind: ServiceExport
metadata:
  namespace: pipy
  name: pipy-ok
spec:
  serviceAccountName: "*"
  rules:
    - portNumber: 8080
      path: "/c2/ok"
      pathType: Prefix
---
apiVersion: flomesh.io/v1alpha1
kind: ServiceExport
metadata:
  namespace: pipy
  name: pipy-ok-c2
spec:
  serviceAccountName: "*"
  rules:
    - portNumber: 8080
      path: "/c2/ok-c2"
      pathType: Prefix
EOF
serviceexport.flomesh.io/pipy-ok created
serviceexport.flomesh.io/pipy-ok-c2 created
root@caishu-test2:~# 
root@caishu-test2:~# kubectl get serviceexport -A
NAMESPACE   NAME         AGE
pipy        pipy-ok      25s
pipy        pipy-ok-c2   25s

Create ServiceExport on cluster-3:

root@caishu-test3:~# cat <<EOF | kubectl apply -f -
apiVersion: flomesh.io/v1alpha1
kind: ServiceExport
metadata:
  namespace: pipy
  name: pipy-ok
spec:
  serviceAccountName: "*"
  rules:
    - portNumber: 8080
      path: "/c3/ok"
      pathType: Prefix
---
apiVersion: flomesh.io/v1alpha1
kind: ServiceExport
metadata:
  namespace: pipy
  name: pipy-ok-c3
spec:
  serviceAccountName: "*"
  rules:
    - portNumber: 8080
      path: "/c3/ok-c3"
      pathType: Prefix
EOF
serviceexport.flomesh.io/pipy-ok created
serviceexport.flomesh.io/pipy-ok-c3 created
root@caishu-test3:~# kubectl get serviceexport -A
NAMESPACE   NAME         AGE
pipy        pipy-ok      50s
pipy        pipy-ok-c3   50s

Step 8 : Check ServiceImport

ServiceImport is another concept introduced by MCS KEP-1645 ( https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api#terminology ); it is also a CRD in ErieCanal. In step #7, when we created the ServiceExports, ErieCanal automatically created the corresponding ServiceImports on all three k3s clusters. It also created corresponding Ingresses on cluster-2 and cluster-3, as these two clusters provide the service at the ClusterSet level.

Check cluster-1:

root@caishu-test1:~# kubectl get serviceimport -A
NAMESPACE   NAME         AGE
pipy        pipy-ok-c3   6m
pipy        pipy-ok      6m
pipy        pipy-ok-c2   101s

Pay attention here: because both cluster-2 and cluster-3 export a service with the same name, pipy-ok, there is only one ServiceImport named pipy-ok for it.
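
To dig into that single ServiceImport (not part of the original transcript), you can describe it on cluster-1. The exact fields depend on the ErieCanal CRD version, but it should reflect the endpoints contributed by both exporting clusters:

kubectl describe serviceimport pipy-ok -n pipy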

Check on cluster-2:

root@caishu-test2:~# kubectl get serviceexport -A
NAMESPACE   NAME         AGE
pipy        pipy-ok      25s
pipy        pipy-ok-c2   25s
root@caishu-test2:~# kubectl get serviceimport -A
NAMESPACE   NAME         AGE
pipy        pipy-ok-c3   5m30s
pipy        pipy-ok      5m30s
root@caishu-test2:~# kubectl get ingress -A
NAMESPACE    NAME                    CLASS   HOSTS   ADDRESS   PORTS   AGE
erie-canal   pipy-repo               pipy    *                 80      6h4m
pipy         svcexp-ing-pipy-ok      pipy    *                 80      8m1s
pipy         svcexp-ing-pipy-ok-c2   pipy    *                 80      8m1s
root@caishu-test2:~# kubectl describe ingress svcexp-ing-pipy-ok -n pipy
Name:             svcexp-ing-pipy-ok
Labels:           <none>
Namespace:        pipy
Address:          
Ingress Class:    pipy
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /c2/ok   pipy-ok:8080 (10.42.0.9:8080)
Annotations:  pipy.ingress.kubernetes.io/lb-type: RoundRobinLoadBalancer
Events:       <none>
root@caishu-test2:~# kubectl describe ingress svcexp-ing-pipy-ok-c2 -n pipy
Name:             svcexp-ing-pipy-ok-c2
Labels:           <none>
Namespace:        pipy
Address:          
Ingress Class:    pipy
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /c2/ok-c2   pipy-ok-c2:8080 (10.42.0.9:8080)
Annotations:  pipy.ingress.kubernetes.io/lb-type: RoundRobinLoadBalancer
Events:       <none>

Check on cluster-3:

root@caishu-test3:~# kubectl get serviceexport -A
NAMESPACE   NAME         AGE
pipy        pipy-ok      50s
pipy        pipy-ok-c3   50s
root@caishu-test3:~# kubectl get serviceimport -A
NAMESPACE   NAME         AGE
pipy        pipy-ok      83s
pipy        pipy-ok-c2   83s
root@caishu-test3:~# kubectl get ingress -A
NAMESPACE    NAME                    CLASS   HOSTS   ADDRESS   PORTS   AGE
erie-canal   pipy-repo               pipy    *                 80      4h37m
pipy         svcexp-ing-pipy-ok      pipy    *                 80      14m
pipy         svcexp-ing-pipy-ok-c3   pipy    *                 80      14m
root@caishu-test3:~# kubectl describe ingress svcexp-ing-pipy-ok -n pipy
Name:             svcexp-ing-pipy-ok
Labels:           <none>
Namespace:        pipy
Address:          
Ingress Class:    pipy
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /c3/ok   pipy-ok:8080 (10.42.0.18:8080)
Annotations:  pipy.ingress.kubernetes.io/lb-type: RoundRobinLoadBalancer
Events:       <none>
root@caishu-test3:~# kubectl describe ingress svcexp-ing-pipy-ok-c3 -n pipy
Name:             svcexp-ing-pipy-ok-c3
Labels:           <none>
Namespace:        pipy
Address:          
Ingress Class:    pipy
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /c3/ok-c3   pipy-ok-c3:8080 (10.42.0.18:8080)
Annotations:  pipy.ingress.kubernetes.io/lb-type: RoundRobinLoadBalancer
Events:       <none>

Step 9 : Create GTP(GlobalTrafficPolicy) on cluster-1

When the curl client tries to access the MCS service provided by cluster-2 and cluster-3, ErieCanal needs to know how to route the requests across cluster-2 and cluster-3. This is configured with a GTP (GlobalTrafficPolicy), another CRD in ErieCanal. In this demo we use ActiveActive mode, which means traffic will be round-robin load balanced across cluster-2 and cluster-3. Let's create the GTP on cluster-1 and check it:

root@caishu-test1:~# cat <<EOF | kubectl apply -f -
apiVersion: flomesh.io/v1alpha1
kind: GlobalTrafficPolicy
metadata:
  namespace: pipy
  name: pipy-ok
spec:
  lbType: ActiveActive
EOF
globaltrafficpolicy.flomesh.io/pipy-ok created
root@caishu-test1:~# kubectl get globaltrafficpolicies -A
NAMESPACE   NAME      AGE
pipy        pipy-ok   107s
root@caishu-test1:~# kubectl describe globaltrafficpolicies pipy-ok -n pipy
Name:         pipy-ok
Namespace:    pipy
Labels:       <none>
Annotations:  <none>
API Version:  flomesh.io/v1alpha1
Kind:         GlobalTrafficPolicy
Metadata:
  Creation Timestamp:  2022-12-14T13:03:44Z
  Generation:          1
  Managed Fields:
    API Version:  flomesh.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:lbType:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2022-12-14T13:03:44Z
  Resource Version:  19143
  UID:               e2f61df3-cce4-4bcf-85f5-2d3d9548d554
Spec:
  Lb Type:  ActiveActive
Events:     <none>

Final Step : Access MCS

Now everything is in place to access an MCS service from cluster-1: we run curl against the MCS service provided by cluster-2 and cluster-3. Because the GTP is ActiveActive, the curl results will come from cluster-2 and cluster-3 alternately, in load-balanced style. Let's try it. Do it on cluster-1:

root@caishu-test1:~# curl_client="$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')"
root@caishu-test1:~# echo $curl_client
curl-5765c9666d-jskpm
root@caishu-test1:~# kubectl exec "${curl_client}" -n curl -c curl -- curl -si http://pipy-ok.pipy:8080/
HTTP/1.1 200 OK
server: pipy
x-pipy-upstream-service-time: 3
content-length: 24
connection: keep-alive

Hi, I am from Cluster3 !
root@caishu-test1:~# kubectl exec "${curl_client}" -n curl -c curl -- curl -si http://pipy-ok.pipy:8080/
HTTP/1.1 200 OK
server: pipy
x-pipy-upstream-service-time: 2
content-length: 24
connection: keep-alive

Hi, I am from Cluster2 !
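
To watch the round-robin behavior more directly, you can repeat the same command in a small loop (a convenience wrapper around the command above, not part of the original transcript):

# Responses should alternate between "Hi, I am from Cluster2 !" and "Hi, I am from Cluster3 !"
for i in 1 2 3 4; do
  kubectl exec "${curl_client}" -n curl -c curl -- curl -s http://pipy-ok.pipy:8080/
done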

Recap

Let's quickly recap this demo:

  • We have three k3s clusters, each with just one node for demo purposes
  • cluster-1 runs the ErieCanal MCS control plane
  • cluster-2 and cluster-3 run the ErieCanal MCS agents
  • A simple hello-world style REST service runs on both cluster-2 and cluster-3; these same-name services together compose one MCS service
  • We create a ServiceExport for the service on both cluster-2 and cluster-3; ErieCanal automatically creates the corresponding ServiceImports and Ingresses across the three k3s clusters
  • We deploy a service on cluster-1 as the 'service consumer' of the MCS service -- it is the curl service, and it is managed by a service mesh, in this demo osm-edge
  • We create a GTP (GlobalTrafficPolicy) to tell ErieCanal how to route traffic from the 'curl' pod to the service providers. In this demo it is the ActiveActive policy, a round-robin load balancing policy
  • ErieCanal injects service mesh outbound routing rules into the 'curl' sidecar proxy. As the osm-edge sidecar proxy is pipy based, these rules are PipyJS code generated by the ErieCanal controller
  • We curl the MCS service from inside the 'curl' pod running on cluster-1, and we can see the curl results come from cluster-2 and cluster-3 in round-robin load-balanced style

That's all. Questions and ideas are welcome on the Slack channel -- it's 'openservicemesh' on Slack. Enjoy!