Having difficulties running STUNner in headless mode on a single-node kURL environment. Since it runs on a single node, there is no load balancer controller responding to requests for new Services of `Type: LoadBalancer`. Hence, the Gateway is exposed as a ClusterIP Service, and the UDP:3478 port is exposed externally via Ingress NGINX. The NGINX ingress controller is configured with `hostNetwork: true`, listening on ports 80, 443, and 3478.
Establishing a connection to STUNner on UDP:3478 works as expected, but it then gets stuck in an error loop of `Failed to handle datagram` with `no allocation found`. Any suggestions on how to further debug/fix this issue?
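For context, exposing a raw UDP port through ingress-nginx is done via its `udp-services` ConfigMap rather than an Ingress resource. A minimal sketch of that mapping (the namespace and Service name `stunner/udp-gateway` are placeholders; substitute your actual Gateway Service):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  # format: "<external-port>: <namespace>/<service>:<service-port>"
  3478: "stunner/udp-gateway:3478"
```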
I'm afraid this has something to do with the weird way in which NGINX forwards UDP packets. It seems `stunnerd` gets every single packet from a different source port:
```
15:12:09.005952 server.go:39: turn DEBUG: Received 20 bytes of udp from 10.32.0.1:53688 on [::]:3478
15:12:09.006163 server.go:39: turn DEBUG: Received 20 bytes of udp from 10.32.0.1:52114 on [::]:3478
15:12:09.056479 server.go:39: turn DEBUG: Received 28 bytes of udp from 10.32.0.1:60373 on [::]:3478
15:12:09.056876 server.go:39: turn DEBUG: Received 28 bytes of udp from 10.32.0.1:52089 on [::]:3478
15:12:09.072498 server.go:39: turn DEBUG: Received 120 bytes of udp from 10.32.0.1:58237 on [::]:3478
15:12:09.073417 server.go:39: turn DEBUG: Received 120 bytes of udp from 10.32.0.1:34128 on [::]:3478
15:12:09.293926 server.go:39: turn DEBUG: Received 20 bytes of udp from 10.32.0.1:51530 on [::]:3478
15:12:09.324390 server.go:39: turn DEBUG: Received 28 bytes of udp from 10.32.0.1:44682 on [::]:3478
15:12:09.345078 server.go:39: turn DEBUG: Received 120 bytes of udp from 10.32.0.1:47189 on [::]:3478
15:12:09.365122 server.go:39: turn DEBUG: Received 124 bytes of udp from 10.32.0.1:37462 on [::]:3478
15:12:09.365895 server.go:39: turn DEBUG: Received 124 bytes of udp from 10.32.0.1:41700 on [::]:3478
```
This makes me believe that NGINX somehow thinks it has to create a new UDP proxy connection for every client packet. The first time `stunnerd` receives a TURN packet that assumes prior state (that's the CreatePermission request), it fails because there is no allocation for that source port: the corresponding CreateAllocation that would have created that state came from a different NGINX source port.
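The failure mode can be sketched in a few lines of Go (purely illustrative, not `stunnerd`'s actual code): a TURN server keeps allocations keyed by the client's source address, so when the proxy rewrites the source port between packets, the lookup for the follow-up request misses.

```go
package main

import "fmt"

// allocation stands in for TURN per-client state (relay address, permissions, ...).
type allocation struct{ relayPort int }

// server keeps allocations indexed by the client's source "ip:port",
// which is what the TURN server sees as the client's 5-tuple.
type server struct {
	allocations map[string]*allocation
}

// handleAllocate records state for the source address the Allocate came from.
func (s *server) handleAllocate(src string) {
	s.allocations[src] = &allocation{relayPort: 49152}
}

// handleCreatePermission requires prior state for the same source address.
func (s *server) handleCreatePermission(src string) error {
	if _, ok := s.allocations[src]; !ok {
		// This is the "no allocation found" loop: NGINX sent the
		// Allocate and the CreatePermission from different ports.
		return fmt.Errorf("no allocation found for %s", src)
	}
	return nil
}

func main() {
	s := &server{allocations: map[string]*allocation{}}
	s.handleAllocate("10.32.0.1:53688")                // Allocate arrives from port 53688
	err := s.handleCreatePermission("10.32.0.1:52114") // CreatePermission from a new port
	fmt.Println(err)
}
```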
My advice would be to either fix NGINX (maybe it runs with an extra-small UDP conntrack TTL? seems improbable) or remove NGINX from the loop altogether. For instance, you can deploy `stunnerd` into the host network namespace and use the static TURN server URI `turn:<node-public-IP>:3478`. This may, however, create port clashes between NGINX and `stunnerd`, which is not a problem as long as `stunnerd` uses only UDP and NGINX only TCP.
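The host-network option boils down to a single pod-spec field. A rough sketch (the image, args, and manifest shape are illustrative; adapt them to however you currently deploy `stunnerd`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stunnerd
spec:
  selector:
    matchLabels: { app: stunnerd }
  template:
    metadata:
      labels: { app: stunnerd }
    spec:
      hostNetwork: true           # bind UDP:3478 directly on the node
      containers:
      - name: stunnerd
        image: l7mp/stunnerd:latest
        ports:
        - containerPort: 3478
          protocol: UDP
```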
You can also choose to expose `stunnerd` on a NodePort by setting the `stunner.l7mp.io/service-type: NodePort` annotation on your Gateway (this would be my pick). Unfortunately, you cannot request a particular NodePort via the Gateway API, so the public port will be dynamic, but you can then use STUNner's auth-service to generate a dynamic ICE server config for your clients.
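A sketch of the annotated Gateway (the Gateway name, GatewayClass name, and listener protocol are placeholders; match them to your existing STUNner setup):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: udp-gateway
  annotations:
    stunner.l7mp.io/service-type: NodePort   # render a NodePort Service
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
  - name: udp-listener
    port: 3478
    protocol: TURN-UDP
```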