Migrate L2 announcements and LB-IPAM to cilium (#505)
No need to use MetalLB since Cilium can do this as well.
aaronmondal committed Dec 18, 2023
1 parent ef74f9c commit df6f5b9
Showing 3 changed files with 29 additions and 31 deletions.
42 changes: 18 additions & 24 deletions deployment-examples/kubernetes/00_infra.sh
@@ -6,6 +6,8 @@
 #
 # See https://kind.sigs.k8s.io/docs/user/local-registry/.
 
+set -xeuo pipefail
+
 reg_name='kind-registry'
 reg_port='5001'
 if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
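The `set -xeuo pipefail` line added above makes the script trace each command (`-x`) and abort on errors instead of silently continuing. A quick sketch of what `-u` and `pipefail` change (probe commands are illustrative, run under bash):

```shell
#!/usr/bin/env bash
# Each probe runs in a subshell so a failing probe doesn't kill this script.

# Default behavior: a pipeline's status is the LAST command's status,
# so "false | true" counts as success.
(false | true); echo "no pipefail: $?"   # prints "no pipefail: 0"

# With pipefail, any failing stage fails the whole pipeline.
(set -o pipefail; false | true); echo "pipefail: $?"   # prints "pipefail: 1"

# With -u, expanding an unset variable is a hard error instead of "".
(set -u; : "${SOME_UNSET_VAR}") 2>/dev/null || echo "unset var rejected"
```

Without `-e`/`pipefail`, a failed `docker` or `helm` step would let the script continue and fail confusingly later.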
@@ -52,6 +54,7 @@ fi
 # Advertise the registry location.
 
 cat <<EOF | kubectl apply -f -
+---
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -77,50 +80,41 @@ kubectl wait --for condition=Established crd/referencegrants.gateway.networking.
 # Start cilium.
 
 helm repo add cilium https://helm.cilium.io
+
 helm repo update cilium
 helm upgrade \
   --install cilium cilium/cilium \
-  --version 1.15.0-pre.3 \
+  --version 1.14.5 \
   --namespace kube-system \
   --set k8sServiceHost=kind-control-plane \
   --set k8sServicePort=6443 \
   --set kubeProxyReplacement=strict \
   --set gatewayAPI.enabled=true \
+  --set l2announcements.enabled=true \
   --wait
 
-# Set up MetalLB. Kind's nodes are containers running on the local docker
-# network. We reuse that network for LB-IPAM so that LoadBalancers are available
-# via "real" local IPs.
+# Kind's nodes are containers running on the local docker network. We reuse that
+# network for LB-IPAM so that LoadBalancers are available via "real" local IPs.
 
 KIND_NET_CIDR=$(docker network inspect kind -f '{{(index .IPAM.Config 0).Subnet}}')
-METALLB_IP_START=$(echo ${KIND_NET_CIDR} | sed "s@0.0/16@255.200@")
-METALLB_IP_END=$(echo ${KIND_NET_CIDR} | sed "s@0.0/16@255.250@")
-METALLB_IP_RANGE="${METALLB_IP_START}-${METALLB_IP_END}"
-
-helm install --namespace metallb-system --create-namespace \
-  --repo https://metallb.github.io/metallb metallb metallb \
-  --version 0.13.12 \
-  --wait
+CILIUM_IP_CIDR=$(echo ${KIND_NET_CIDR} | sed "s@0.0/16@255.0/28@")
 
 cat <<EOF | kubectl apply -f -
 ---
-apiVersion: metallb.io/v1beta1
-kind: L2Advertisement
+apiVersion: cilium.io/v2alpha1
+kind: CiliumL2AnnouncementPolicy
 metadata:
-  name: l2-ip
-  namespace: metallb-system
+  name: l2-announcements
 spec:
-  ipAddressPools:
-  - default-pool
+  externalIPs: true
+  loadBalancerIPs: true
 ---
-apiVersion: metallb.io/v1beta1
-kind: IPAddressPool
+apiVersion: cilium.io/v2alpha1
+kind: CiliumLoadBalancerIPPool
 metadata:
   name: default-pool
-  namespace: metallb-system
 spec:
-  addresses:
-  - ${METALLB_IP_RANGE}
+  cidrs:
+  - cidr: ${CILIUM_IP_CIDR}
 EOF
 
 # At this point we have a similar setup to the one that we'd get with a cloud
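The `sed` substitution in this script carves a small block out of kind's docker network for Cilium's LB-IPAM. Assuming the subnet used elsewhere in this commit's examples, `172.20.0.0/16` (the real value comes from `docker network inspect kind`), the rewrite behaves like this:

```shell
# Hypothetical value; the real one comes from `docker network inspect kind`.
KIND_NET_CIDR="172.20.0.0/16"

# Rewrite the "0.0/16" suffix into a /28 near the top of the range.
CILIUM_IP_CIDR=$(echo ${KIND_NET_CIDR} | sed "s@0.0/16@255.0/28@")
echo ${CILIUM_IP_CIDR}  # prints 172.20.255.0/28
```

A /28 covers 16 addresses (172.20.255.0 through 172.20.255.15), which matches the example LoadBalancer IPs like `172.20.255.4` in the README below, and it replaces the old MetalLB range without overlapping addresses docker hands out to kind's nodes.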
6 changes: 5 additions & 1 deletion deployment-examples/kubernetes/01_operations.sh
@@ -3,7 +3,11 @@
 # TODO(aaronmondal): Add Grafana, OpenTelemetry and the various other standard
 # deployments one would expect in a cluster.
 
-kubectl apply -f gateway.yaml
+set -xeuo pipefail
+
+SRC_ROOT=$(git rev-parse --show-toplevel)
+
+kubectl apply -f ${SRC_ROOT}/deployment-examples/kubernetes/gateway.yaml
 
 IMAGE_TAG=$(nix eval .#image.imageTag --raw)
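The `SRC_ROOT=$(git rev-parse --show-toplevel)` change makes the script runnable from any directory: `git rev-parse --show-toplevel` prints the absolute path of the repository root regardless of the current working directory. A small sketch using a throwaway repo (directory names are illustrative):

```shell
# Create a throwaway repo with a nested directory to demonstrate.
tmp=$(mktemp -d)
git init -q "${tmp}"
mkdir -p "${tmp}/deployment-examples/kubernetes"
cd "${tmp}/deployment-examples/kubernetes"

# The repo root comes back even from deep inside a subdirectory, so
# paths built from it (like the gateway.yaml path above) always resolve.
SRC_ROOT=$(git rev-parse --show-toplevel)
echo "${SRC_ROOT}"  # prints the throwaway repo's root, not the subdirectory
```

The previous `kubectl apply -f gateway.yaml` only worked when invoked from `deployment-examples/kubernetes/`.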
12 changes: 6 additions & 6 deletions deployment-examples/kubernetes/README.md
@@ -3,8 +3,8 @@
 This deployment sets up a 3-container deployment with separate CAS, scheduler
 and worker. Don't use this example deployment in production. It's insecure.
 
-In this example we're using `kind` to set up the cluster and `cilium` with
-`metallb` to provide a `LoadBalancer` and `GatewayController`.
+In this example we're using `kind` to set up the cluster and `cilium` to
+provide a `LoadBalancer` and `GatewayController`.
 
 First set up a local development cluster:

@@ -41,8 +41,8 @@ echo "Scheduler IP: $SCHEDULER"
 
 # Prints something like:
 #
-# Cache IP: 172.20.255.200
-# Scheduler IP: 172.20.255.201
+# Cache IP: 172.20.255.4
+# Scheduler IP: 172.20.255.5
 ```

You can now pass these IPs to your bazel invocation to use the remote cache and
@@ -65,8 +65,8 @@ bazel test \
 > # .bazelrc.user
 > build --config=lre
 > build --remote_instance_name=main
-> build --remote_cache=grpc://172.20.255.200:50051
-> build --remote_executor=grpc://172.20.255.201:50052
+> build --remote_cache=grpc://172.20.255.4:50051
+> build --remote_executor=grpc://172.20.255.5:50052
 > ```
 
 When you're done testing, delete the cluster:
