
Deploy OWT in Kubernetes without exposing public IP address and udp port range for webrtc

Qiujiao edited this page Feb 24, 2021 · 3 revisions

There are some known issues with this method; we are working on a solution and will update this page in the future.

When deploying OWT in Kubernetes, you may find it hard to expose a public IP address and the large UDP port range required by the webrtc agent. In this article we will introduce how to use a STUN server to avoid this webrtc agent configuration in Kubernetes.

STUN server

We will use coturn as the STUN/TURN server. Suppose the public IP address of the server is 10.234.150.128; follow the steps below to install coturn:

docker run -itd --name=coturn --net=host ubuntu:18.04 bash
docker exec -it coturn bash
apt update -y
apt install coturn vim -y
service coturn stop
cp /etc/turnserver.conf /etc/turnserver.conf.original

Then uncomment the following items in /etc/turnserver.conf and configure them:

listening-port=3478
external-ip=10.234.150.128
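
If you prefer to script these edits instead of making them by hand, the two settings can be applied with sed. The sketch below works on a scratch copy for illustration; inside the container the target would be /etc/turnserver.conf:

```shell
#!/bin/sh
# Uncomment and set listening-port and external-ip non-interactively.
# We operate on a scratch copy here; point CONF at /etc/turnserver.conf
# inside the coturn container to do it for real.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
#listening-port=3478
#external-ip=
EOF

# Strip a leading "#" if present and force the desired values.
sed -i 's/^#\?listening-port=.*/listening-port=3478/' "$CONF"
sed -i 's/^#\?external-ip=.*/external-ip=10.234.150.128/' "$CONF"

cat "$CONF"
```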

Start the coturn service with the following command:

service coturn start

Please make sure to install the STUN server on a machine outside the Kubernetes environment; otherwise there may be issues caused by Kubernetes network rules interfering with STUN traffic.

Deploy OWT in Kubernetes

We will take the all-in-one OWT image as an example; you can split OWT modules into independent images for deployments with different purposes.

Configure webrtc stun server and port

Add a script startowt.sh to the OWT docker image. This script accepts the STUN server IP/port and the portal public IP/port as parameters and writes them into webrtc_agent/agent.toml and portal/portal.toml. We can pass these values in the OWT deployment yaml file as:

    spec:
      containers:
      - name: owt
        image: owt-run:5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          hostPort: 8080
        - containerPort: 3004
          hostPort: 3004
        command: ["/home/startowt.sh"]
        args:
        - --stunport=3478
        - --stunip=10.234.150.128
        - --portalip=portal_public_IP_address
        - --portalport=8080

Then in webrtc_agent/agent.toml, the stunport and stunserver items should be replaced by startowt.sh:

stunport = 3478 #default: 0
stunserver = "10.234.150.128" #default: ""

In portal/portal.toml, the ip_address and port items should be replaced by startowt.sh:

ip_address = "portal_public_IP_address" #default: ""
# Port that the socket.io server listens at.
port = 8080 #default: 8080
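
The substitution logic inside startowt.sh could be sketched as follows. The arg names mirror the deployment yaml above; the file layout and default values are assumptions, and for illustration the script patches scratch copies of the two toml fragments rather than the real files:

```shell
#!/bin/sh
# Sketch of the substitutions a startowt.sh could perform.
# For illustration we work in a temp dir; the real script would target
# webrtc_agent/agent.toml and portal/portal.toml in the OWT install dir.
WORKDIR=$(mktemp -d)

# Stand-in fragments with the default values shown above.
cat > "$WORKDIR/agent.toml" <<'EOF'
stunport = 0
stunserver = ""
EOF
cat > "$WORKDIR/portal.toml" <<'EOF'
ip_address = ""
port = 8080
EOF

# Defaults, overridden by the --key=value args from the deployment yaml.
STUN_IP=10.234.150.128
STUN_PORT=3478
PORTAL_IP=portal_public_IP_address
PORTAL_PORT=8080
for arg in "$@"; do
  case "$arg" in
    --stunip=*)     STUN_IP="${arg#*=}" ;;
    --stunport=*)   STUN_PORT="${arg#*=}" ;;
    --portalip=*)   PORTAL_IP="${arg#*=}" ;;
    --portalport=*) PORTAL_PORT="${arg#*=}" ;;
  esac
done

# Rewrite the four configuration items in place.
sed -i "s/^stunport = .*/stunport = ${STUN_PORT}/"         "$WORKDIR/agent.toml"
sed -i "s/^stunserver = .*/stunserver = \"${STUN_IP}\"/"   "$WORKDIR/agent.toml"
sed -i "s/^ip_address = .*/ip_address = \"${PORTAL_IP}\"/" "$WORKDIR/portal.toml"
sed -i "s/^port = .*/port = ${PORTAL_PORT}/"               "$WORKDIR/portal.toml"
```

In the real image, the script would end by starting the OWT services with whatever start scripts the image ships.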

Expose TCP ports

There are public port or IP requirements for the following two modules:

- portal: signaling server public IP and port
- apps: webserver ports (http or https)

We can expose these ports with an owtsvc.yaml as follows:

apiVersion: v1
kind: Service
metadata:
  name: owt
  labels:
    k8s-app: owt
spec:
  selector:
    k8s-app: owt
  type: NodePort
  ports:
  - port: 8080
    name: socketio
    protocol: TCP
    targetPort: 8080
    nodePort: 8080
  - port: 3004
    name: https
    protocol: TCP
    targetPort: 3004
    nodePort: 3004

Then launch the OWT deployment and service with kubectl; you can then open https://webserver-ip:3004 to access OWT deployed in Kubernetes. Note that the nodePort values 8080 and 3004 lie outside the default NodePort range (30000-32767), so the kube-apiserver's --service-node-port-range must be extended for this service to be created.
Note: There is a bug where the webrtc negotiation process fails at the first login; after refreshing the page, Chrome works well. We will fix this bug in the future.