STUNner demo: Video-conferencing with LiveKit

This document guides you through installing LiveKit into Kubernetes when it is used together with the STUNner WebRTC media gateway.

In this demo you will learn to:

  • integrate a typical WebRTC application with STUNner,
  • obtain a valid TLS certificate to secure the signaling plane,
  • deploy the LiveKit server into Kubernetes, and
  • configure STUNner to expose LiveKit to clients.

Prerequisites

The below installation instructions require an operational cluster running a supported version of Kubernetes (>1.22). Most hosted or private Kubernetes cluster services will work, but make sure that the cluster comes with a functional load-balancer integration (all major hosted Kubernetes services should support this). Otherwise, STUNner will not be able to allocate a public IP address for clients to reach your WebRTC infra. Unfortunately, Minikube is not supported for this demo. The reason is that Let's Encrypt certificate issuance is not available with nip.io; later on you will learn more about why this is crucial.
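If you want a quick sanity check before you start, you can query the cluster version and list the nodes (this is just a convenience step, not part of the demo):

kubectl version
kubectl get nodes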

Setup

The recommended way to install LiveKit into Kubernetes is to deploy the media servers into the host-network namespace of the Kubernetes nodes (hostNetwork: true). This deployment model, however, comes with a set of unpleasant operational limitations and security concerns. With STUNner, in contrast, media servers can be deployed into ordinary Kubernetes pods and run over a private IP network, like any "normal" Kubernetes workload.

The figure below shows LiveKit deployed into regular Kubernetes pods behind STUNner without the host-networking hack. Here, LiveKit is deployed behind STUNner in the media-plane deployment model, so that STUNner acts as a "local" STUN/TURN server for LiveKit, saving the overhead of using a public 3rd-party STUN/TURN server for NAT traversal.

STUNner LiveKit integration deployment architecture

In this tutorial we deploy a video room example using LiveKit's React SDK, the LiveKit server for media exchange, a Kubernetes Ingress gateway to secure signaling connections and handle TLS, and STUNner as a media gateway to expose the LiveKit server pool to clients.

Installation

Let's start with a disclaimer. The LiveKit client example must run in the browser over a secure HTTPS connection, because getUserMedia is available only in secure contexts. This implies that the client-server signaling connection must be secure too. Unfortunately, self-signed TLS certs will not work, so we have to come up with a way to provide our clients with a valid TLS cert. The consequence is that the majority of the below installation guide is about securing client connections to LiveKit over TLS; as it turns out, once HTTPS is working correctly, integrating LiveKit with STUNner is very simple.

In the below example, STUNner will be installed into the identically named namespace, while LiveKit and the Ingress gateway will live in the default namespace.

TLS certificates

As mentioned above, the LiveKit server will need a valid TLS cert, which means it must run behind an existing DNS domain name backed by a CA signed TLS certificate. This is simple if you have your own domain, but if you don't then nip.io provides a dead simple wildcard DNS for any IP address. We will use this to "own a domain" and obtain a CA signed certificate for LiveKit. This will allow us to point the domain name client-<ingress-IP>.nip.io to an ingress HTTP gateway in our Kubernetes cluster, which will then use some automation (namely, cert-manager) to obtain a valid CA signed cert.
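To see how this works, suppose the Ingress receives the (hypothetical) IP address 1.2.3.4: any nip.io subdomain that embeds this IP in dashed form resolves back to it, which you can verify with a quick DNS lookup:

dig +short client-1-2-3-4.nip.io
# 1.2.3.4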

Note that public wildcard DNS domains might run into rate limiting issues. If this occurs you can try alternative services instead of nip.io.

Ingress

The first step of obtaining a valid cert is to install a Kubernetes Ingress: this will be used during the validation of our certificates and to terminate TLS encrypted client connections.

Install an ingress controller into your cluster. We used the official nginx ingress, but this is not required.

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
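Optionally, check that the ingress controller pod has started (it is deployed into the default namespace, since no namespace was specified above):

kubectl get pods | grep ingress-nginx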

Wait until Kubernetes assigns an external IP to the Ingress.

until [ -n "$(kubectl get service ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')" ]; do sleep 1; done

Store the external IP address Kubernetes assigned to our Ingress; this will be needed later when we configure the validation pipeline for our TLS certs. Note that the dots in the IP address are rewritten to dashes, so that it can be embedded into a nip.io subdomain.

kubectl get service ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
export INGRESSIP=$(kubectl get service ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESSIP=$(echo $INGRESSIP | sed 's/\./-/g')
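For example, if the Ingress received the (hypothetical) external IP 1.2.3.4, the stored value is its dashed form, ready to be embedded into a nip.io hostname:

echo $INGRESSIP
# 1-2-3-4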

Cert manager

We use the official cert-manager to automate TLS certificate management.

Add the Helm repository, which contains the cert-manager Helm chart, and install the charts:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager \
    --create-namespace --set global.leaderElection.namespace=cert-manager \
    --set installCRDs=true --timeout 600s

At this point we have all the necessary boilerplate set up to automate TLS issuance for LiveKit.
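Before moving on, it is worth verifying that the cert-manager pods have come up (the exact pod names vary between chart versions):

kubectl get pods -n cert-manager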

STUNner

Now comes the fun part. The simplest way to run this demo is to clone the STUNner git repository and deploy the manifest packaged with STUNner.

Install the STUNner gateway operator and STUNner via Helm:

Legacy mode:

helm repo add stunner https://l7mp.io/stunner
helm repo update
helm install stunner-gateway-operator stunner/stunner-gateway-operator --create-namespace --namespace=stunner-system --set stunnerGatewayOperator.dataplane.mode=legacy
helm install stunner stunner/stunner --create-namespace --namespace=stunner

Managed mode:

helm repo add stunner https://l7mp.io/stunner
helm repo update
helm install stunner-gateway-operator stunner/stunner-gateway-operator --create-namespace --namespace=stunner-system
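Whichever mode you choose, you can confirm that the gateway operator is up and running before proceeding:

kubectl get pods -n stunner-system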

Configure STUNner to act as a STUN/TURN server to clients, and route all received media to the LiveKit server pods.

git clone https://github.com/l7mp/stunner
cd stunner
kubectl apply -f docs/examples/livekit/livekit-call-stunner.yaml

The relevant parts here are the STUNner Gateway definition, which exposes the STUNner STUN/TURN server over UDP:3478 to the Internet, and the UDPRoute definition, which takes care of routing media to the pods running the LiveKit service.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: udp-gateway
  namespace: stunner
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: udp-listener
      port: 3478
      protocol: UDP
---
apiVersion: stunner.l7mp.io/v1
kind: UDPRoute
metadata:
  name: livekit-media-plane
  namespace: stunner
spec:
  parentRefs:
    - name: udp-gateway
  rules:
    - backendRefs:
        - name: livekit-server

Once the Gateway resource is installed into Kubernetes, STUNner will create a Kubernetes LoadBalancer for the Gateway to expose the TURN server on UDP:3478 to clients. It can take up to a minute for Kubernetes to allocate a public external IP for the service.

Wait until Kubernetes assigns an external IP and store the external IP assigned by Kubernetes to STUNner in an environment variable for later use.

until [ -n "$(kubectl get svc udp-gateway -n stunner -o jsonpath='{.status.loadBalancer.ingress[0].ip}')" ]; do sleep 1; done
export STUNNERIP=$(kubectl get service udp-gateway -n stunner -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
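You can double-check that the Gateway has been exposed and that the variable is set; the actual address will of course differ in your cluster:

kubectl get service udp-gateway -n stunner
echo $STUNNERIP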

LiveKit

The crucial step of integrating any WebRTC media server with STUNner is to ensure that the server instructs the clients to use STUNner as the STUN/TURN server. In order to achieve this, first we patch the public IP address of the STUNner STUN/TURN server we have learned above into our LiveKit deployment manifest:

sed -i "s/stunner_ip/$STUNNERIP/g" docs/examples/livekit/livekit-server.yaml

Assuming that Kubernetes assigns the IP address 1.2.3.4 to STUNner (i.e., STUNNERIP=1.2.3.4), the relevant part of the LiveKit config would be something like the below:

...
rtc:
  ...
  turn_servers:
  - host: 1.2.3.4
    username: user-1
    credential: pass-1
    protocol: udp
    port: 3478

This will make sure that LiveKit is started with STUNner as the STUN/TURN server. If unsure about the STUNner settings to use, you can always use the handy stunnerctl CLI tool to dump the running STUNner configuration.

stunnerctl -n stunner config udp-gateway
Gateway: stunner/udp-gateway (loglevel: "all:INFO")
Authentication type: static, username/password: user-1/pass-1
Listeners:
  - Name: stunner/udp-gateway/udp-listener
    Protocol: TURN-UDP
    Public address:port: 34.118.88.91:3478
    Routes: [stunner/livekit-media-plane]
    Endpoints: [10.76.1.4, 10.80.4.47]

Note that LiveKit itself will not use STUNner (that would amount to a less efficient symmetric ICE mode); with the above configuration we are just telling LiveKit to instruct its clients to use STUNner to reach the LiveKit media servers.

We also need the Ingress external IP address we have stored previously: this will make sure that the TLS certificate created by cert-manager will be bound to the proper nip.io domain and IP address.

sed -i "s/ingressserviceip/$INGRESSIP/g" docs/examples/livekit/livekit-server.yaml

Finally, fire up LiveKit.

kubectl apply -f docs/examples/livekit/livekit-server.yaml

The demo installation bundle includes a lot of resources to deploy LiveKit:

  • a LiveKit-server,
  • a web server serving the landing page using the LiveKit React example,
  • a cluster issuer for the TLS certificates,
  • an Ingress resource to terminate the secure connections between your browser and the Kubernetes cluster.

Wait until all pods become operational and jump right into testing!
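One way to follow the rollout is to watch the pods and, assuming cert-manager has created a Certificate resource for the Ingress, the certificate status (it should eventually report Ready):

kubectl get pods
kubectl get certificates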

Test

After installing everything, execute the following command to retrieve the URL of your fresh LiveKit demo app:

echo client-$INGRESSIP.nip.io

Copy the URL into your browser, and you should be greeted with the LiveKit Video title. On the landing page you must set the LiveKit URL, which would normally be the LiveKit server's address; in our case it is the other subdomain we set earlier in the Ingress manifest.

Executing the following command shall get you the required URL:

echo wss://mediaserver-$INGRESSIP.nip.io:443

To obtain a valid token, install the livekit-cli and issue the below command.

livekit-cli create-token \
    --api-key access_token --api-secret secretsecretsecretsecretsecretsecret \
    --join --room room --identity user1 \
    --valid-for 24h

Copy the access token into the token field and hit the Connect button. If everything is set up correctly, you should be able to connect to a room. If you repeat the procedure in a separate browser tab you can enjoy a nice video-conferencing session with yourself, with the twist that all media between the browser tabs flows through STUNner and the LiveKit server deployed in your Kubernetes cluster.