
UDP broadcast fails between pods #1063

Closed
tibcoplord opened this issue Nov 6, 2019 · 7 comments

Labels
kind/support Categorizes issue or PR as a support question.

Comments

@tibcoplord
What happened:

After starting a kind cluster and creating two StatefulSet pods, I found UDP broadcast doesn't work between the pods.

What you expected to happen:

UDP broadcast to work.

How to reproduce it (as minimally and precisely as possible):

Start 2 pods -

$ kind create cluster
$ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
$ kubectl apply -f centos.yaml

where centos.yaml is -

apiVersion: v1
kind: Service
metadata:
  name: centos
  labels:
    app: centos
spec:
  selector:
    app: centos
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: centos
spec:
  selector:
    matchLabels:
      app: centos
  serviceName: centos
  replicas: 2
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: centos
    spec:
      containers:
        - name: centos
          image: centos/tools:latest
          tty: true

On the first pod listen for UDP packets -

$ kubectl exec centos-0 -- nc -l -u -v 54321
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Listening on :::54321
Ncat: Listening on 0.0.0.0:54321

On the second pod send a UDP broadcast -

$  echo 123 | kubectl exec -it centos-1 -- nc -u -v 255.255.255.255 54321
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 255.255.255.255:54321.
Ncat: 4 bytes sent, 0 bytes received in 0.02 seconds.

Note that nothing is received on the first pod.
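
To confirm the failure is specific to broadcast, the same datagram can be sent as unicast to the first pod's IP; per the Kubernetes network model, pod-to-pod unicast is expected to work. A sketch, with the pod IP looked up via kubectl -

$ POD_IP=$(kubectl get pod centos-0 -o jsonpath='{.status.podIP}')
$ echo 123 | kubectl exec -i centos-1 -- nc -u -v "$POD_IP" 54321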

Anything else we need to know?:

The same test does work in minikube. I see -

$ kubectl exec centos-0 -- nc -l -u -v 54321
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Listening on :::54321
Ncat: Listening on 0.0.0.0:54321
123
Ncat: Connection from 172.17.0.7.

Environment:

  • kind version: (use kind version):

v0.5.1

  • Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:41:55Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-20T18:57:36Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version: (use docker info):
Client:
 Debug Mode: false

Server:
 Containers: 20
  Running: 2
  Paused: 0
  Stopped: 18
 Images: 73
 Server Version: 19.03.2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.9.184-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 11.71GiB
 Name: docker-desktop
 ID: PWD2:CXCM:265O:QLC4:UG2C:HCVP:53AR:CR7U:RJNV:3WNP:CNMO:47ZC
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 46
  Goroutines: 58
  System Time: 2019-11-06T12:12:49.5405556Z
  EventsListeners: 2
 HTTP Proxy: gateway.docker.internal:3128
 HTTPS Proxy: gateway.docker.internal:3129
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  172.30.29.56:5000
  na-bos-artifacts.na.tibco.com:2001
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
  • OS (e.g. from /etc/os-release):

macOS 10.15

@tibcoplord tibcoplord added the kind/bug Categorizes issue or PR as related to a bug. label Nov 6, 2019
@BenTheElder
Member

will try to take a look at this tomorrow but also
/help

@k8s-ci-robot
Contributor

@BenTheElder:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

will try to take a look at this tomorrow but also
/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Nov 7, 2019
@aojea
Contributor

aojea commented Nov 8, 2019

Does UDP broadcast work in a normal Kubernetes cluster?
I assume it works in a single-node cluster like minikube, but in a normal cluster there are multiple external factors, such as the external network or node OS security, that may filter those packets.
Definitely something interesting to check; anyone who wants to help just has to tcpdump at the different points and see where the packet is dropped.
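
A sketch of that kind of capture, assuming tcpdump is present in the centos/tools image and can be installed on the kind node container (named kind-control-plane by default) -

$ kubectl exec centos-1 -- tcpdump -ni eth0 udp port 54321       # does the broadcast leave the sender?
$ kubectl exec centos-0 -- tcpdump -ni eth0 udp port 54321       # does it ever reach the receiver?
$ docker exec kind-control-plane tcpdump -ni any udp port 54321  # does it cross the node's bridge?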

@BenTheElder
Member

Running this same test:
kubectl apply -f ~/test.yaml

$ kubectl exec centos-0 -- nc -l -u -v 54321
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Listening on :::54321
Ncat: Listening on 0.0.0.0:54321
$ echo 123 | kubectl exec -it centos-1 -- nc -u -v 255.255.255.255 54321
Unable to use a TTY - input is not a terminal or the right kind of file
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 255.255.255.255:54321.
Ncat: 4 bytes sent, 0 bytes received in 0.01 seconds.

Also does not work on a newly created GKE cluster with standard options.

@BenTheElder
Member

The pods in fact schedule on different nodes. I don't think the Kubernetes network model guarantees that you can broadcast between pods like this.

@BenTheElder BenTheElder added kind/support Categorizes issue or PR as a support question. and removed help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. labels Nov 11, 2019
@BenTheElder
Member

You could try using hostNetwork: true to run in the host network namespace and bypass all pod networking, but I'm not sure that's a good idea.
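
A minimal sketch of that variant, changing only the pod spec of the StatefulSet above (dnsPolicy: ClusterFirstWithHostNet is added here on the assumption that cluster DNS should still resolve from the host network namespace) -

    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: centos
          image: centos/tools:latest
          tty: true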

I remembered there was a Slack thread in #kind related to this issue and looked back:

The application uses UDP port 54321 for discovery purposes - this works on both docker-desktop and Minikube (where the pods are on the same node). However, I've found that UDP port 54321 seems to be blocked between pods in kind (although it's somewhat hard to tell).

Right, so this could probably be done with hostNetwork, but I don't think it's the right approach on Kubernetes. Services are a native object with native discovery mechanisms, and sometimes users use something like Istio / Linkerd.

https://kubernetes.io/docs/concepts/services-networking/service/ is probably the right starting point for this.
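
As a sketch of that native discovery using the headless Service already in the reproduction (assuming the default namespace and cluster.local domain): DNS returns one A record per ready pod, and each StatefulSet pod also gets a stable per-pod name, so peers can be addressed with unicast instead of broadcast -

$ kubectl exec centos-0 -- getent hosts centos    # all pod IPs behind the headless service
$ echo 123 | kubectl exec -i centos-1 -- nc -u -v centos-0.centos.default.svc.cluster.local 54321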

@BenTheElder
Member

I'm going to close this out as not a bug in kind, but for the general problem of service discovery in Kubernetes you may find more help in #kubernetes-users or maybe #sig-networking on the Kubernetes Slack.
