Building Calico with Kubernetes
Calico enables networking and network policy in Kubernetes clusters across the cloud. The instructions below walk through integrating Calico with Kubernetes on Linux on IBM Z for the following distributions:
- RHEL (7.8, 7.9, 8.6, 8.8, 8.9, 9.0, 9.2, 9.3)
- SLES (12 SP5, 15 SP5)
- Ubuntu (20.04, 22.04, 23.10)
General Notes:
- When following the steps below, please use a standard permission user unless otherwise specified.
- A directory `/<source_root>/` will be referred to in these instructions; this is a temporary writable directory you may place anywhere you like.
- The following build instructions were tested using Kubernetes version 1.28.
- Instructions for building the basic Calico components, which include `calico/ctl`, `calico/node` and `calico/kube-controllers`, can be found here.
```shell
export PATCH_URL=https://raw.githubusercontent.com/linux-on-ibm-z/scripts/master/Calico/3.27.0/patch
```
- This builds a docker image `tigera/operator` that will be used to manage the lifecycle of a Calico installation on Kubernetes:

```shell
mkdir -p $GOPATH/src/github.com/tigera/operator
git clone -b v1.32.3 https://github.com/tigera/operator $GOPATH/src/github.com/tigera/operator
cd $GOPATH/src/github.com/tigera/operator
curl -s $PATCH_URL/operator.patch | git apply -
make image
# The built image needs to be tagged with the version number to work correctly with Kubernetes
docker tag tigera/operator:latest-s390x quay.io/tigera/operator:v1.32.3
```
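If the tag step succeeded, the image should now be visible under its new name. A quick sanity check (a sketch, assuming docker is on the PATH):

```shell
# Sanity check (sketch): confirm the retagged operator image exists locally.
EXPECTED="quay.io/tigera/operator:v1.32.3"
if docker image inspect "$EXPECTED" >/dev/null 2>&1; then
  echo "found $EXPECTED"
else
  echo "missing $EXPECTED (build or tag step may have failed)" >&2
fi
```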
```shell
cd $GOPATH/src/github.com/projectcalico/calico/pod2daemon/
make image
docker tag calico/node-driver-registrar:latest-s390x calico/node-driver-registrar:v3.27.0
```
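The same `latest-s390x` to release-tag pattern applies to the other Calico images. A dry-run sketch (the `echo` prints the commands instead of running them; the image list here is illustrative, not exhaustive):

```shell
VERSION=v3.27.0
# Dry run: prints one "docker tag ..." line per image; remove "echo" to actually retag.
for img in felix typha cni csi; do
  echo docker tag "calico/${img}:latest-s390x" "calico/${img}:${VERSION}"
done
```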
- Verify the following images are on the system:

```
REPOSITORY                            TAG
calico/kube-controllers               latest-s390x
calico/node-driver-registrar          latest-s390x
tigera/operator                       latest-s390x
calico/felix                          latest-s390x
calico/node                           latest-s390x
calico/typha                          latest-s390x
calico/dikastes                       latest-s390x
calico/flannel-migration-controller   latest-s390x
calico/apiserver                      latest-s390x
calico/cni                            latest-s390x
calico/ctl                            latest-s390x
calico/csi                            latest-s390x
calico/pod2daemon-flexvol             latest-s390x
calico/pod2daemon                     latest-s390x
calico/bird                           latest-s390x
calico/kube-controllers               v3.27.0
calico/node-driver-registrar          v3.27.0
calico/felix                          v3.27.0
calico/node                           v3.27.0
calico/typha                          v3.27.0
calico/dikastes                       v3.27.0
calico/flannel-migration-controller   v3.27.0
calico/apiserver                      v3.27.0
calico/cni                            v3.27.0
calico/ctl                            v3.27.0
calico/pod2daemon-flexvol             v3.27.0
calico/pod2daemon                     v3.27.0
```
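A listing like the one above can be produced with `docker images` (a sketch; it assumes docker is available and prints a fallback message otherwise):

```shell
# List Calico-related images as "REPOSITORY TAG" pairs, as in the table above.
FALLBACK="no calico/tigera images found"
OUT=$(docker images --format '{{.Repository}} {{.Tag}}' 2>/dev/null | grep -E '^(calico|tigera)/' || true)
echo "${OUT:-$FALLBACK}"
```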
Once you have all the necessary components built on Z systems, you can:
- Configure and run your Kubernetes cluster following here
- Install Calico as per the instructions, ensuring `tigera-operator.yaml` and `custom-resources.yaml` have correct values reflecting the operational cluster:

```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
```
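After the manifests are applied, the operator takes a short while to roll out the Calico components. A helper sketch to wait for things to settle (the function name and timeout values are illustrative, assuming kubectl points at the cluster configured above):

```shell
# Wait for the operator deployment, then for all Calico pods, to become Ready.
wait_for_calico() {
  kubectl rollout status deployment/tigera-operator -n tigera-operator --timeout=120s &&
  kubectl wait --for=condition=Ready pods --all -n calico-system --timeout=300s
}
# Usage: wait_for_calico
```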
- The following pods are expected after a successful deployment:
```
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-7c8c4668c-c98f6 1/1 Running 0 7s
calico-apiserver calico-apiserver-7c8c4668c-ppn26 1/1 Running 0 7s
calico-system calico-kube-controllers-964bb7d7f-jwp7j 1/1 Running 0 43s
calico-system calico-node-lgnhs 1/1 Running 0 43s
calico-system calico-typha-66955454c8-cngld 1/1 Running 0 44s
calico-system csi-node-driver-gdzb2 2/2 Running 0 43s
kube-system coredns-5d78c9869d-trpj2 1/1 Running 0 3m22s
kube-system coredns-5d78c9869d-wl6xh 1/1 Running 0 3m22s
kube-system etcd-c43192v1.fyre.ibm.com 1/1 Running 0 3m33s
kube-system kube-apiserver-c43192v1.fyre.ibm.com 1/1 Running 0 3m33s
kube-system kube-controller-manager-c43192v1.fyre.ibm.com 1/1 Running 0 3m33s
kube-system kube-proxy-w2gch 1/1 Running 0 3m22s
kube-system kube-scheduler-c43192v1.fyre.ibm.com 1/1 Running 0 3m33s
tigera-operator tigera-operator-7ff8dc855-dh5qx 1/1 Running 0 2m17s
```
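Beyond pod status, the operator also exposes an aggregated health view through the `tigerastatus` resource, where `AVAILABLE` should read `True` for each component. A sketch, assuming kubectl access to the cluster:

```shell
# Check operator-reported component health (apiserver, calico, ...).
calico_health() {
  kubectl get tigerastatus
}
# Usage: calico_health
```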
Step 4: (Optional) Use Calico network policy on top of flannel networking (Flannel)
- Ensure you have a Calico compatible Kubernetes cluster
- Download and install the flannel networking manifest for the Kubernetes API datastore:

```shell
curl https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/canal.yaml -O
kubectl apply -f canal.yaml
```
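`canal.yaml` deploys a `canal` DaemonSet in `kube-system`. A sketch to wait for it to roll out on every node (timeout value is illustrative):

```shell
canal_ready() {
  # Succeeds once the canal DaemonSet pods are up on every node.
  kubectl rollout status daemonset/canal -n kube-system --timeout=180s
}
# Usage: canal_ready
```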
- The following pods are expected upon successful deployment:
```
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-7c8c4668c-c98f6 1/1 Running 0 2m44s
calico-apiserver calico-apiserver-7c8c4668c-ppn26 1/1 Running 0 2m44s
calico-system calico-kube-controllers-964bb7d7f-jwp7j 1/1 Running 0 3m20s
calico-system calico-node-lgnhs 1/1 Running 0 3m20s
calico-system calico-typha-66955454c8-cngld 1/1 Running 0 3m21s
calico-system csi-node-driver-gdzb2 2/2 Running 0 3m20s
kube-system calico-kube-controllers-867bf4f5b5-ktnjg 1/1 Running 0 9s
kube-system canal-qgrtf 2/2 Running 1 (7s ago) 10s
kube-system coredns-5d78c9869d-trpj2 1/1 Running 0 5m59s
kube-system coredns-5d78c9869d-wl6xh 1/1 Running 0 5m59s
kube-system etcd-c43192v1.fyre.ibm.com 1/1 Running 0 6m10s
kube-system kube-apiserver-c43192v1.fyre.ibm.com 1/1 Running 0 6m10s
kube-system kube-controller-manager-c43192v1.fyre.ibm.com 1/1 Running 0 6m10s
kube-system kube-proxy-w2gch 1/1 Running 0 5m59s
kube-system kube-scheduler-c43192v1.fyre.ibm.com 1/1 Running 0 6m10s
tigera-operator tigera-operator-7ff8dc855-dh5qx 1/1 Running 0 4m54s
```
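To confirm that Calico actually enforces policy on top of the flannel-provided pod network, you can apply a simple default-deny ingress policy in a throwaway namespace. This is a hypothetical demo; the namespace name `policy-demo` and the function name are illustrative:

```shell
# Hypothetical demo: default-deny ingress in a test namespace; Calico enforces it
# even though flannel provides the pod network.
demo_default_deny() {
  kubectl create namespace policy-demo
  kubectl apply -n policy-demo -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
}
# Usage: demo_default_deny   (clean up with: kubectl delete namespace policy-demo)
```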
The information provided in this article is accurate at the time of writing, but ongoing development in the open-source projects involved may make the information incorrect or obsolete. Please open an issue or contact us on the IBM Z Community if you have any questions or feedback.