Unexpected behaviour using "OnlyLocal" annotation on NodePort with 1.6.1 and Weave-Net 1.6 #44963

Closed
tomte76 opened this issue Apr 26, 2017 · 28 comments
Labels: area/kube-proxy, sig/network

tomte76 commented Apr 26, 2017

  • Cluster is running on-premises on OpenStack
  • Cluster consists of 6 VMs
  • 1 master and 5 slaves
  • OS is Debian Jessie x64 (8.7)
  • Installation on top of the OS is mostly done with Ansible
  • Docker version is

ii docker-engine 1.12.6-0~debian-jessie amd64

  • Kubernetes is
ii  kubeadm 1.6.1-00 amd64
ii  kubectl 1.6.1-00 amd64
ii  kubelet 1.6.1-00 amd64
ii  kubernetes-cni 0.5.1-00 amd64
  • Installation as follows
  • Init the master

kubeadm init

  • wait for the process to complete
  • init and join the slaves

ansible --become -i ansible-hosts all -a "kubeadm join --token=<token from init> 192.168.141.24"

  • installing weave-net

kubectl apply -f https://git.io/weave-kube-1.6

  • waiting for the cluster to complete
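One way to watch for that, assuming the weave-net DaemonSet pods carry the name=weave-net label, is:

$ kubectl get pods -n kube-system -l name=weave-net -w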
  • completed cluster looks like this
$ kubectl get nodes -o wide
NAME                  STATUS    AGE       VERSION   EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION
dt-kube-test-1        Ready     19h       v1.6.1    <none>        Debian GNU/Linux 8 (jessie)   3.16.0-4-amd64
dt-kube-test-2        Ready     19h       v1.6.1    <none>        Debian GNU/Linux 8 (jessie)   3.16.0-4-amd64
dt-kube-test-3        Ready     19h       v1.6.1    <none>        Debian GNU/Linux 8 (jessie)   3.16.0-4-amd64
dt-kube-test-4        Ready     19h       v1.6.1    <none>        Debian GNU/Linux 8 (jessie)   3.16.0-4-amd64
dt-kube-test-5        Ready     19h       v1.6.1    <none>        Debian GNU/Linux 8 (jessie)   3.16.0-4-amd64
dt-kube-test-master   Ready     19h       v1.6.1    <none>        Debian GNU/Linux 8 (jessie)   3.16.0-4-amd64

$ kubectl get pods,svc,ep,deploy -n kube-system -o wide
NAME                                             READY     STATUS    RESTARTS   AGE       IP               NODE
po/etcd-dt-kube-test-master                      1/1       Running   1          19h       192.168.141.24   dt-kube-test-master
po/kube-apiserver-dt-kube-test-master            1/1       Running   1          19h       192.168.141.24   dt-kube-test-master
po/kube-controller-manager-dt-kube-test-master   1/1       Running   1          19h       192.168.141.24   dt-kube-test-master
po/kube-dns-3913472980-p5qr8                     3/3       Running   0          19h       10.40.0.1        dt-kube-test-5
po/kube-proxy-5fbwt                              1/1       Running   0          18h       192.168.141.22   dt-kube-test-5
po/kube-proxy-5jwrf                              1/1       Running   0          18h       192.168.141.20   dt-kube-test-2
po/kube-proxy-7qtsl                              1/1       Running   0          18h       192.168.141.23   dt-kube-test-4
po/kube-proxy-b3bmt                              1/1       Running   0          18h       192.168.141.19   dt-kube-test-1
po/kube-proxy-vjsx8                              1/1       Running   0          18h       192.168.141.24   dt-kube-test-master
po/kube-proxy-z4fmn                              1/1       Running   0          18h       192.168.141.21   dt-kube-test-3
po/kube-scheduler-dt-kube-test-master            1/1       Running   1          19h       192.168.141.24   dt-kube-test-master
po/weave-net-22hvm                               2/2       Running   0          19h       192.168.141.23   dt-kube-test-4
po/weave-net-5g135                               2/2       Running   0          19h       192.168.141.19   dt-kube-test-1
po/weave-net-f484j                               2/2       Running   0          19h       192.168.141.24   dt-kube-test-master
po/weave-net-wz451                               2/2       Running   0          19h       192.168.141.21   dt-kube-test-3
po/weave-net-x9h4r                               2/2       Running   0          19h       192.168.141.20   dt-kube-test-2
po/weave-net-zf827                               2/2       Running   0          19h       192.168.141.22   dt-kube-test-5

NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE       SELECTOR
svc/kube-dns   10.96.0.10   <none>        53/UDP,53/TCP   19h       k8s-app=kube-dns

NAME                         ENDPOINTS                   AGE
ep/kube-controller-manager   <none>                      19h
ep/kube-dns                  10.40.0.1:53,10.40.0.1:53   19h
ep/kube-scheduler            <none>                      19h

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINER(S)              IMAGE(S)                                                                                                                                                                   SELECTOR
deploy/kube-dns   1         1         1            1           19h       kubedns,dnsmasq,sidecar   gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1,gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1,gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1   k8s-app=kube-dns

  • everything looks fine from my point of view
  • then deploying the WordPress container

kubectl create -f wordpress.yaml

  • YAML looks like
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
  labels:
    name: wordpress
spec:
  containers:
    - image: wordpress
      name: wordpress
      env:
        - name: WORDPRESS_DB_PASSWORD
          # change this - must match mysql.yaml password
          value: yourpassword
      ports:
        - containerPort: 80
          name: wordpress

  • Setting up the service, which is expected to be OnlyLocal and preserve the client IP

kubectl create -f wordpress-service.yaml

  • YAML looks like
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/external-traffic: "OnlyLocal"
  labels:
    name: wpfrontend
  name: wpfrontend
spec:
  ports:
    - port: 80
  selector:
    name: wordpress
  type: NodePort
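A quick read-back of the annotation and the allocated node port (jsonpath form assumed; dots inside the annotation key must be escaped):

$ kubectl get svc wpfrontend -o jsonpath='{.metadata.annotations.service\.beta\.kubernetes\.io/external-traffic}{"\n"}{.spec.ports[0].nodePort}{"\n"}'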
  • Created pods, services and endpoints look like
$ kubectl get pods,svc,ep -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP          NODE
po/mysql       1/1       Running   0          19h       10.44.0.1   dt-kube-test-3
po/wordpress   1/1       Running   0          57s       10.40.0.2   dt-kube-test-5

NAME             CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE       SELECTOR
svc/kubernetes   10.96.0.1        <none>        443/TCP        19h       <none>
svc/mysql        10.108.23.252    <none>        3306/TCP       19h       name=mysql
svc/wpfrontend   10.109.104.198   <nodes>       80:30555/TCP   2m        name=wordpress

NAME            ENDPOINTS             AGE
ep/kubernetes   192.168.141.24:6443   19h
ep/mysql        10.44.0.1:3306        19h
ep/wpfrontend   10.40.0.2:80          2m
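To confirm which node hosts the single endpoint, the pod's node name can be read directly (pod name taken from the listing above):

$ kubectl get pod wordpress -o jsonpath='{.spec.nodeName}{"\n"}'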
  • expected result, looking at the WordPress logs: the client IP of external traffic is preserved
  • actual result: I see the MASQ IP in the Apache logs when I access the NodePort on the floating IP of the OpenStack VM
  • using wget to access the floating IP
$ wget http://62.50.111.95:30555
--2017-04-26 19:26:19--  http://62.50.111.95:30555/
Connecting to 62.50.111.95:30555 … connected.
HTTP request sent, awaiting response … 302 Found
Location: http://62.50.111.95:30555/wp-admin/install.php [following]
--2017-04-26 19:26:20--  http://62.50.111.95:30555/wp-admin/install.php
Reusing existing connection to 62.50.111.95:30555.
HTTP request sent, awaiting response … 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html                                                   [ <=>                                                                                                                              ]  10.88K  --.-KB/s    in 0.05s   
2017-04-26 19:26:21 (202 KB/s) - ‘index.html’ saved [11142]
  • Logs from the container
$ kubectl logs po/wordpress -f
WordPress not found in /var/www/html - copying now...
Complete! WordPress has been successfully copied to /var/www/html
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.40.0.2. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.40.0.2. Set the 'ServerName' directive globally to suppress this message
[Wed Apr 26 17:23:59.743178 2017] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/5.6.30 configured -- resuming normal operations
[Wed Apr 26 17:23:59.743230 2017] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
10.40.0.0 - - [26/Apr/2017:17:26:17 +0000] "GET / HTTP/1.1" 302 383 "-" "Wget/1.18 (darwin16.0.0)"
10.40.0.0 - - [26/Apr/2017:17:26:18 +0000] "GET /wp-admin/install.php HTTP/1.1" 200 11515 "-" "Wget/1.18 (darwin16.0.0)"
  • Service configuration in full YAML
$ kubectl get svc/wpfrontend -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
  creationTimestamp: 2017-04-26T17:21:49Z
  labels:
    name: wpfrontend
  name: wpfrontend
  namespace: default
  resourceVersion: "117507"
  selfLink: /api/v1/namespaces/default/services/wpfrontend
  uid: d4c22e87-2aa4-11e7-ad86-fa163e2814da
spec:
  clusterIP: 10.109.104.198
  ports:
  - nodePort: 30555
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: wordpress
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
  • The annotation is in place
  • The node port 30555 only works on the node the pod runs on
  • in tcpdump on that node I can see the traffic on the external interface
# tcpdump -n -i eth0 port 30555
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
19:49:13.747101 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [SEW], seq 2796682649, win 65535, options [mss 1198,nop,wscale 5,nop,nop,TS val 897439112 ecr 0,sackOK,eol], length 0
19:49:13.747260 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [S.E], seq 389384330, ack 2796682650, win 26480, options [mss 1336,sackOK,TS val 17857533 ecr 897439112,nop,wscale 7], length 0
19:49:13.774025 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 1, win 4113, options [nop,nop,TS val 897439143 ecr 17857533], length 0
19:49:13.826017 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [P.], seq 1:147, ack 1, win 4113, options [nop,nop,TS val 897439143 ecr 17857533], length 146
19:49:13.826121 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [.], ack 147, win 216, options [nop,nop,TS val 17857553 ecr 897439143], length 0
19:49:13.882326 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [P.], seq 1:384, ack 147, win 216, options [nop,nop,TS val 17857567 ecr 897439143], length 383
19:49:13.908639 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 384, win 4101, options [nop,nop,TS val 897439272 ecr 17857567], length 0
19:49:13.914183 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [P.], seq 147:313, ack 384, win 4101, options [nop,nop,TS val 897439272 ecr 17857567], length 166
19:49:13.914236 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [.], ack 313, win 224, options [nop,nop,TS val 17857575 ecr 897439272], length 0
19:49:14.923284 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [.], seq 384:2756, ack 313, win 224, options [nop,nop,TS val 17857827 ecr 897439272], length 2372
19:49:14.923478 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [.], seq 2756:5128, ack 313, win 224, options [nop,nop,TS val 17857827 ecr 897439272], length 2372
19:49:14.923627 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [.], seq 5128:7500, ack 313, win 224, options [nop,nop,TS val 17857827 ecr 897439272], length 2372
19:49:14.923771 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [P.], seq 7500:8749, ack 313, win 224, options [nop,nop,TS val 17857827 ecr 897439272], length 1249
19:49:14.946367 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [.], seq 8749:11121, ack 313, win 224, options [nop,nop,TS val 17857833 ecr 897439272], length 2372
19:49:15.015701 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 2756, win 4032, options [nop,nop,TS val 897440379 ecr 17857827], length 0
19:49:15.015749 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 5128, win 3958, options [nop,nop,TS val 897440379 ecr 17857827], length 0
19:49:15.015757 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 7500, win 3884, options [nop,nop,TS val 897440379 ecr 17857827], length 0
19:49:15.015764 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 8749, win 3845, options [nop,nop,TS val 897440379 ecr 17857827], length 0
19:49:15.015770 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 11121, win 3771, options [nop,nop,TS val 897440379 ecr 17857833], length 0
19:49:15.015828 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [P.], seq 11121:11899, ack 313, win 224, options [nop,nop,TS val 17857850 ecr 897440379], length 778
19:49:15.020674 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 11121, win 4021, options [nop,nop,TS val 897440381 ecr 17857833], length 0
19:49:15.020711 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 11121, win 4096, options [nop,nop,TS val 897440381 ecr 17857833], length 0
19:49:15.034223 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 11899, win 4071, options [nop,nop,TS val 897440398 ecr 17857850], length 0
19:49:15.040302 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [F.], seq 313, ack 11899, win 4096, options [nop,nop,TS val 897440399 ecr 17857850], length 0
19:49:15.040434 IP 192.168.141.22.30555 > 212.9.183.78.64387: Flags [F.], seq 11899, ack 314, win 224, options [nop,nop,TS val 17857856 ecr 897440399], length 0
19:49:15.059801 IP 212.9.183.78.64387 > 192.168.141.22.30555: Flags [.], ack 11900, win 4096, options [nop,nop,TS val 897440421 ecr 17857856], length 0
  • Nothing on the docker0 interface
  • Nothing on the weave interface
  • Nothing on the datapath interface
  • Nothing on any other interface with port 30555
  • so I assume the traffic is already port-translated
  • so I looked for port 80 traffic instead
  • there I can see the traffic, masqueraded, on the weave interface
# tcpdump -n -i weave port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on weave, link-type EN10MB (Ethernet), capture size 262144 bytes
19:55:56.155870 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [S], seq 3040451662, win 65535, options [mss 1198,nop,wscale 5,nop,nop,TS val 897840275 ecr 0,sackOK,eol], length 0
19:55:56.155934 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [S.], seq 625537050, ack 3040451663, win 26480, options [mss 1336,sackOK,TS val 17958135 ecr 897840275,nop,wscale 7], length 0
19:55:56.182634 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 625537051, win 4113, options [nop,nop,TS val 897840304 ecr 17958135], length 0
19:55:56.232856 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [P.], seq 0:146, ack 1, win 4113, options [nop,nop,TS val 897840304 ecr 17958135], length 146: HTTP: GET / HTTP/1.1
19:55:56.232971 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [.], ack 147, win 216, options [nop,nop,TS val 17958154 ecr 897840304], length 0
19:55:56.295822 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [P.], seq 1:384, ack 147, win 216, options [nop,nop,TS val 17958170 ecr 897840304], length 383: HTTP: HTTP/1.1 302 Found
19:55:56.314389 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 384, win 4101, options [nop,nop,TS val 897840434 ecr 17958170], length 0
19:55:56.318685 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [P.], seq 146:312, ack 384, win 4101, options [nop,nop,TS val 897840434 ecr 17958170], length 166: HTTP: GET /wp-admin/install.php HTTP/1.1
19:55:56.318709 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [.], ack 313, win 224, options [nop,nop,TS val 17958176 ecr 897840434], length 0
19:55:57.339544 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [.], seq 384:2756, ack 313, win 224, options [nop,nop,TS val 17958431 ecr 897840434], length 2372: HTTP: HTTP/1.1 200 OK
19:55:57.339824 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [.], seq 2756:5128, ack 313, win 224, options [nop,nop,TS val 17958431 ecr 897840434], length 2372: HTTP
19:55:57.339970 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [.], seq 5128:7500, ack 313, win 224, options [nop,nop,TS val 17958431 ecr 897840434], length 2372: HTTP
19:55:57.340117 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [P.], seq 7500:8749, ack 313, win 224, options [nop,nop,TS val 17958431 ecr 897840434], length 1249: HTTP
19:55:57.358701 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 2756, win 4027, options [nop,nop,TS val 897841473 ecr 17958431], length 0
19:55:57.363693 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 5128, win 3958, options [nop,nop,TS val 897841473 ecr 17958431], length 0
19:55:57.363705 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 6314, win 4096, options [nop,nop,TS val 897841473 ecr 17958431], length 0
19:55:57.363711 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 8686, win 4021, options [nop,nop,TS val 897841476 ecr 17958431], length 0
19:55:57.363717 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 8686, win 4096, options [nop,nop,TS val 897841476 ecr 17958431], length 0
19:55:57.363739 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 8749, win 4094, options [nop,nop,TS val 897841476 ecr 17958431], length 0
19:55:57.370742 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [.], seq 8749:11121, ack 313, win 224, options [nop,nop,TS val 17958439 ecr 897841476], length 2372: HTTP
19:55:57.370817 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [P.], seq 11121:11899, ack 313, win 224, options [nop,nop,TS val 17958439 ecr 897841476], length 778: HTTP
19:55:57.397226 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 11121, win 4058, options [nop,nop,TS val 897841505 ecr 17958439], length 0
19:55:57.397244 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 11899, win 4034, options [nop,nop,TS val 897841505 ecr 17958439], length 0
19:55:57.397251 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [F.], seq 312, ack 11899, win 4096, options [nop,nop,TS val 897841506 ecr 17958439], length 0
19:55:57.397408 IP 10.40.0.2.80 > 212.9.183.78.64411: Flags [F.], seq 11899, ack 314, win 224, options [nop,nop,TS val 17958446 ecr 897841506], length 0
19:55:57.461604 IP 10.40.0.0.64411 > 10.40.0.2.80: Flags [.], ack 11900, win 4096, options [nop,nop,TS val 897841554 ecr 17958446], length 0
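Both translations (the DNAT to 10.40.0.2:80 and the SNAT to 10.40.0.0) should also show up together in the conntrack table, assuming the conntrack CLI is installed on the node:

# conntrack -L -p tcp | grep 30555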
  • iptables-save from the node where the pod runs and where I did the debugging
# iptables-save 
# Generated by iptables-save v1.4.21 on Wed Apr 26 19:57:39 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [4:240]
:POSTROUTING ACCEPT [4:240]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3E4LNQKKWZF7G6SH - [0:0]
:KUBE-SEP-MDTVK4HFPOPAEIYP - [0:0]
:KUBE-SEP-OEY6JJQSBCQPRKHS - [0:0]
:KUBE-SEP-VEBDCE2HZVWWIDTR - [0:0]
:KUBE-SEP-ZZWTYWPNMXVEGZHF - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-M7XME3WTB36R42AM - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-WPVPFUBZSXLUXOBX - [0:0]
:KUBE-XLB-WPVPFUBZSXLUXOBX - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/wpfrontend:" -m tcp --dport 30555 -j KUBE-XLB-WPVPFUBZSXLUXOBX
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3E4LNQKKWZF7G6SH -s 10.40.0.1/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-3E4LNQKKWZF7G6SH -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.40.0.1:53
-A KUBE-SEP-MDTVK4HFPOPAEIYP -s 192.168.141.24/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-MDTVK4HFPOPAEIYP -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-MDTVK4HFPOPAEIYP --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.141.24:6443
-A KUBE-SEP-OEY6JJQSBCQPRKHS -s 10.40.0.1/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-OEY6JJQSBCQPRKHS -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.40.0.1:53
-A KUBE-SEP-VEBDCE2HZVWWIDTR -s 10.44.0.1/32 -m comment --comment "default/mysql:" -j KUBE-MARK-MASQ
-A KUBE-SEP-VEBDCE2HZVWWIDTR -p tcp -m comment --comment "default/mysql:" -m tcp -j DNAT --to-destination 10.44.0.1:3306
-A KUBE-SEP-ZZWTYWPNMXVEGZHF -s 10.40.0.2/32 -m comment --comment "default/wpfrontend:" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZZWTYWPNMXVEGZHF -p tcp -m comment --comment "default/wpfrontend:" -m tcp -j DNAT --to-destination 10.40.0.2:80
-A KUBE-SERVICES ! -s 10.32.0.0/12 -d 10.108.23.252/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.108.23.252/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-M7XME3WTB36R42AM
-A KUBE-SERVICES ! -s 10.32.0.0/12 -d 10.109.104.198/32 -p tcp -m comment --comment "default/wpfrontend: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.109.104.198/32 -p tcp -m comment --comment "default/wpfrontend: cluster IP" -m tcp --dport 80 -j KUBE-SVC-WPVPFUBZSXLUXOBX
-A KUBE-SERVICES ! -s 10.32.0.0/12 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.32.0.0/12 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.32.0.0/12 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-3E4LNQKKWZF7G6SH
-A KUBE-SVC-M7XME3WTB36R42AM -m comment --comment "default/mysql:" -j KUBE-SEP-VEBDCE2HZVWWIDTR
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-MDTVK4HFPOPAEIYP --mask 255.255.255.255 --rsource -j KUBE-SEP-MDTVK4HFPOPAEIYP
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-MDTVK4HFPOPAEIYP
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-OEY6JJQSBCQPRKHS
-A KUBE-SVC-WPVPFUBZSXLUXOBX -m comment --comment "default/wpfrontend:" -j KUBE-SEP-ZZWTYWPNMXVEGZHF
-A KUBE-XLB-WPVPFUBZSXLUXOBX -s 10.32.0.0/12 -m comment --comment "Redirect pods trying to reach external loadbalancer VIP to clusterIP" -j KUBE-SVC-WPVPFUBZSXLUXOBX
-A KUBE-XLB-WPVPFUBZSXLUXOBX -m comment --comment "Balancing rule 0 for default/wpfrontend:" -j KUBE-SEP-ZZWTYWPNMXVEGZHF
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Wed Apr 26 19:57:39 2017
# Generated by iptables-save v1.4.21 on Wed Apr 26 19:57:39 2017
*filter
:INPUT ACCEPT [308:79317]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [330:38364]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
:fail2ban-ssh - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
-A INPUT -d 172.17.0.1/32 -i docker0 -p tcp -m tcp --dport 6783 -j DROP
-A INPUT -d 172.17.0.1/32 -i docker0 -p udp -m udp --dport 6783 -j DROP
-A INPUT -d 172.17.0.1/32 -i docker0 -p udp -m udp --dport 6784 -j DROP
-A INPUT -i docker0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i docker0 -p tcp -m tcp --dport 53 -j ACCEPT
-A FORWARD -i docker0 -o weave -j DROP
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o weave -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-4vtqMI<kx/2]jD%_c0S%thO%V dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]<@p dst -j ACCEPT
-A fail2ban-ssh -j RETURN
COMMIT
# Completed on Wed Apr 26 19:57:40 2017
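Since the WEAVE chain hangs off POSTROUTING, watching its per-rule packet counters while repeating the wget should show which rule rewrites the source:

# watch -n1 'iptables -t nat -L WEAVE -n -v'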
  • Did I get something wrong?
  • I expected this setup to preserve the client IP when accessing the Apache in the WordPress pod

@MrHohn: If you need any more information, please let me know.

@tomte76 tomte76 changed the title Unexpected bahaviour using "OnlyLocal" annotation on NodePort with 1.6.1 and Weave-Net 1.6 Unexpected behaviour using "OnlyLocal" annotation on NodePort with 1.6.1 and Weave-Net 1.6 Apr 26, 2017
MrHohn (Member) commented Apr 26, 2017

/assign

MrHohn (Member) commented Apr 26, 2017

Thanks for the detailed info. It seems the source address indeed got masqueraded from 212.9.183.78 to 10.40.0.0 when packets reached the wordpress pod (10.40.0.2).

I couldn't find any suspicious rules in the iptables-save output at first. The weave masquerade rules happen in the POSTROUTING chain, so I thought they shouldn't affect local packets. Update: it seems the rules below broke source IP preservation.

-A POSTROUTING -j WEAVE
...
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
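A minimal way to test that hypothesis, using the Weave CIDR and pod IP from this report, would be to temporarily exempt traffic to the pod from that masquerade rule (diagnostic only; weave may re-add its rules):

# iptables -t nat -I WEAVE 1 ! -s 10.32.0.0/12 -d 10.40.0.2/32 -j RETURN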

Pinging more folks @bowei @kubernetes/sig-network-bugs

tomte76 (Author) commented Apr 27, 2017

I reinstalled the whole cluster to be sure there is nothing left from old installations and tests. Kubernetes is now 1.6.2; all other versions remain the same. The problem still persists, except that the MASQ address has now moved to 10.32.0.1.

10.32.0.1 - - [26/Apr/2017:22:29:49 +0000] "GET / HTTP/1.1" 302 383 "-" "Wget/1.18 (darwin16.0.0)"
10.32.0.1 - - [26/Apr/2017:22:29:49 +0000] "GET /wp-admin/install.php HTTP/1.1" 200 11515 "-" "Wget/1.18 (darwin16.0.0)"
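10.32.0.1 appears to be the node's own address on the weave bridge (ntpd binds to it in the netstat output further down), which can be confirmed with:

# ip addr show weave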

Pods and services are deployed as follows

$ kubectl get pods,svc,ep,deploy -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP          NODE
po/mysql       1/1       Running   0          10h       10.44.0.1   dt-kube-test-3
po/wordpress   1/1       Running   0          10h       10.32.0.2   dt-kube-test-2

NAME             CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE       SELECTOR
svc/kubernetes   10.96.0.1        <none>        443/TCP        10h       <none>
svc/mysql        10.104.246.243   <none>        3306/TCP       10h       name=mysql
svc/wpfrontend   10.106.212.133   <nodes>       80:31062/TCP   10h       name=wordpress

NAME            ENDPOINTS             AGE
ep/kubernetes   192.168.141.30:6443   10h
ep/mysql        10.44.0.1:3306        10h
ep/wpfrontend   10.32.0.2:80          10h

As I can see on dt-kube-test-2, NodePort 31062 is opened by kube-proxy:

root@dt-kube-test-2:~# netstat -tulpen 
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      0          11556       768/sshd        
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      0          10707       655/exim4       
tcp        0      0 0.0.0.0:6783            0.0.0.0:*               LISTEN      0          80859       20591/weaver    
tcp        0      0 127.0.0.1:6784          0.0.0.0:*               LISTEN      0          80863       20591/weaver    
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      0          78988       20059/kubelet   
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      0          79485       20144/kube-proxy
tcp6       0      0 :::31062                :::*                    LISTEN      0          84524       20144/kube-proxy
tcp6       0      0 :::22                   :::*                    LISTEN      0          11558       768/sshd        
tcp6       0      0 :::25                   :::*                    LISTEN      0          10706       655/exim4       
tcp6       0      0 :::6781                 :::*                    LISTEN      0          81733       20723/weave-npc 
tcp6       0      0 :::6782                 :::*                    LISTEN      0          80865       20591/weaver    
tcp6       0      0 :::4194                 :::*                    LISTEN      0          78972       20059/kubelet   
tcp6       0      0 :::10250                :::*                    LISTEN      0          78993       20059/kubelet   
tcp6       0      0 :::10255                :::*                    LISTEN      0          78995       20059/kubelet   
udp        0      0 0.0.0.0:6783            0.0.0.0:*                           0          80858       20591/weaver    
udp        0      0 0.0.0.0:6784            0.0.0.0:*                           0          80663       -               
udp        0      0 0.0.0.0:7754            0.0.0.0:*                           0          9527        320/dhclient    
udp        0      0 0.0.0.0:68              0.0.0.0:*                           0          10353       320/dhclient    
udp        0      0 10.32.0.1:123           0.0.0.0:*                           106        81487       503/ntpd        
udp        0      0 172.17.0.1:123          0.0.0.0:*                           106        12551       503/ntpd        
udp        0      0 192.168.141.26:123      0.0.0.0:*                           0          10663       503/ntpd        
udp        0      0 127.0.0.1:123           0.0.0.0:*                           0          10662       503/ntpd        
udp        0      0 0.0.0.0:123             0.0.0.0:*                           0          10649       503/ntpd        
udp        0      0 0.0.0.0:8472            0.0.0.0:*                           0          34429       -               
udp6       0      0 :::34619                :::*                                0          9528        320/dhclient    
udp6       0      0 fe80::d0ab:c1ff:fe9:123 :::*                                106        85096       503/ntpd        
udp6       0      0 fe80::8d:56ff:fefb::123 :::*                                106        81491       503/ntpd        
udp6       0      0 fe80::dc85:35ff:fed:123 :::*                                106        81490       503/ntpd        
udp6       0      0 fe80::3843:6fff:fe4:123 :::*                                106        81489       503/ntpd        
udp6       0      0 fe80::c0e4:92ff:fe7:123 :::*                                106        81488       503/ntpd        
udp6       0      0 fe80::f816:3eff:fe7:123 :::*                                106        12552       503/ntpd        
udp6       0      0 ::1:123                 :::*                                0          10664       503/ntpd        
udp6       0      0 :::123                  :::*                                0          10650       503/ntpd  
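The corresponding nat rules for that node port can be listed directly to match the listener against the generated chain:

# iptables -t nat -S KUBE-NODEPORTS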

I identified the kube-proxy pod running on dt-kube-test-2 and I can see the following in its logs:

$ kubectl logs -n kube-system kube-proxy-cvh3f
I0426 22:18:06.935575       1 server.go:225] Using iptables Proxier.
I0426 22:18:07.420915       1 server.go:249] Tearing down userspace rules.
I0426 22:18:07.545399       1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0426 22:18:07.545934       1 conntrack.go:66] Setting conntrack hashsize to 32768
I0426 22:18:07.546207       1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0426 22:18:07.546242       1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
E0426 22:25:52.871403       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0426 22:25:52.871862       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0426 22:33:07.446209       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0426 22:33:07.446674       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0426 22:33:07.446889       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0426 22:33:07.447285       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
[... the same two messages repeat every few minutes for the remainder of the log ...]
E0427 08:03:08.244390       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 08:18:08.244324       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 08:18:08.244439       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 08:18:08.244764       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 08:18:08.244845       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 08:18:08.245176       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 08:18:08.245446       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 08:33:08.244501       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 08:33:08.244859       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 08:33:08.245181       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 08:33:08.245258       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 08:33:08.245722       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 08:33:08.245796       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 08:48:08.245094       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 08:48:08.245179       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 08:48:08.245487       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 08:48:08.245555       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 08:48:08.246006       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 08:48:08.246073       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 09:03:08.245250       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 09:03:08.245598       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 09:03:08.245842       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 09:03:08.246013       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport
E0427 09:03:08.246326       1 proxier.go:180] Service does not contain necessary annotation service.beta.kubernetes.io/healthcheck-nodeport
E0427 09:03:08.246561       1 proxier.go:488] Service "default/wpfrontend" has no healthcheck nodeport

As far as I understand, this is a non-critical warning, because a service of type NodePort does not set up or use a healthcheck nodeport?

@tomte76
Author

tomte76 commented Apr 27, 2017

This message goes away if I create a service of type LoadBalancer (even without a cloud provider in place). I can see the healthcheck-nodeport annotation now:

$ kubectl get svc wpfrontend -oyaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/external-traffic":"OnlyLocal"},"labels":{"name":"wpfrontend"},"name":"wpfrontend","namespace":"default"},"spec":{"ports":[{"port":80}],"selector":{"name":"wordpress"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/external-traffic: OnlyLocal
    service.beta.kubernetes.io/healthcheck-nodeport: "30673"
  creationTimestamp: 2017-04-27T09:58:37Z
  labels:
    name: wpfrontend
  name: wpfrontend
  namespace: default
  resourceVersion: "68671"
  selfLink: /api/v1/namespaces/default/services/wpfrontend
  uid: 156af60c-2b30-11e7-bb45-fa163ef2f7f2
spec:
  clusterIP: 10.96.20.48
  ports:
  - nodePort: 31484
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: wordpress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}

If I access the NodePort, I still see the masqueraded (MASQ) source IP:

10.32.0.1 - - [27/Apr/2017:10:00:24 +0000] "GET / HTTP/1.1" 302 383 "-" "Wget/1.18 (darwin16.0.0)"
10.32.0.1 - - [27/Apr/2017:10:00:25 +0000] "GET /wp-admin/install.php HTTP/1.1" 200 11515 "-" "Wget/1.18 (darwin16.0.0)"

If I curl the healthcheck-nodeport from outside the cluster, I get:

{
	"service": {
		"namespace": "default",
		"name": "wpfrontend"
	},
	"localEndpoints": 1
}

This looks good so far from my point of view.

Putting it all together, I assume that kube-proxy accepts the connection and forwards it to the backend, using the weave interface and its address 10.32.0.1 as the outgoing interface for the connection to the backend.

10.32.0.1 is the weave interface address on dt-kube-test-2.

$ /sbin/ifconfig weave
weave     Link encap:Ethernet  HWaddr 3a:43:6f:4d:0a:f3  
          inet addr:10.32.0.1  Bcast:0.0.0.0  Mask:255.240.0.0
          inet6 addr: fe80::3843:6fff:fe4d:af3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:177 errors:0 dropped:0 overruns:0 frame:0
          TX packets:168 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:74210 (72.4 KiB)  TX bytes:65844 (64.3 KiB)

Unfortunately I have no idea why this happens or how to fix it :(
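
If it helps with debugging: the rewrite should be visible in the NAT rules weave installs on that node. A diagnostic sketch (assuming root on the node, and assuming weave's NAT chain is named WEAVE):

/sbin/iptables -t nat -S WEAVE    # list weave's NAT rules
/sbin/iptables -t nat -nvL WEAVE  # per-rule packet/byte counters, to watch while repeating the request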

@MrHohn
Member

MrHohn commented Apr 28, 2017

Sorry about the delay, I don't have enough insight into weave to explain this issue. I'm now looking into their design and will hopefully have a brief answer soon.

The nodeport warnings you saw in the logs are benign --- or rather, they shouldn't show up at all. #42888 was tracking this and it has already been fixed upstream (#44578), though the fix is not in k8s 1.6.

@MrHohn
Member

MrHohn commented Apr 28, 2017

BTW two quick questions:

  • Where did you run the wget command? Within the cluster or outside?
  • Have you tried accessing the service IP from within the cluster, to see if the source pod IP is preserved?

@tomte76
Author

tomte76 commented Apr 28, 2017

  • I ran the wget commands above from outside the cluster, even from outside OpenStack, using a floating IP assigned to the VM of the Kubernetes node. Both the wget to the wordpress pods and the wget to the /healthz healthcheck-nodeport were run from outside the cluster.

  • I logged into the mysql pod in the cluster, installed wget, and ran a wget against ep/wpfrontend

ep/wpfrontend 10.32.0.2:80 10h

The wget output is as follows:

root@mysql:/# wget http://10.32.0.2:80
converted 'http://10.32.0.2:80' (ANSI_X3.4-1968) -> 'http://10.32.0.2:80' (UTF-8)
--2017-04-28 21:42:52--  http://10.32.0.2/
Connecting to 10.32.0.2:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://10.32.0.2/wp-admin/install.php [following]
converted 'http://10.32.0.2/wp-admin/install.php' (ANSI_X3.4-1968) -> 'http://10.32.0.2/wp-admin/install.php' (UTF-8)
--2017-04-28 21:42:52--  http://10.32.0.2/wp-admin/install.php
Reusing existing connection to 10.32.0.2:80.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'index.html'

index.html                                                  [ <=>                                                                                                                           ]  10.83K  --.-KB/s   in 0.02s  

2017-04-28 21:42:53 (449 KB/s) - 'index.html' saved [11088]

The apache log from the wordpress pod:

10.44.0.1 - - [28/Apr/2017:21:42:52 +0000] "GET / HTTP/1.1" 302 374 "-" "Wget/1.16 (linux-gnu)"
10.44.0.1 - - [28/Apr/2017:21:42:52 +0000] "GET /wp-admin/install.php HTTP/1.1" 200 11461 "-" "Wget/1.16 (linux-gnu)"

10.44.0.1 is the IP address of the mysql pod, as reported by kubectl get pods -o wide.

  • I also did a wget against the cluster IP of the service svc/wpfrontend

svc/wpfrontend 10.96.20.48 <pending> 80:31484/TCP 2h name=wordpress

Output of wget on the native port 80:

root@mysql:/# wget http://10.96.20.48 
converted 'http://10.96.20.48' (ANSI_X3.4-1968) -> 'http://10.96.20.48' (UTF-8)
--2017-04-28 21:48:26--  http://10.96.20.48/
Connecting to 10.96.20.48:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://10.96.20.48/wp-admin/install.php [following]
converted 'http://10.96.20.48/wp-admin/install.php' (ANSI_X3.4-1968) -> 'http://10.96.20.48/wp-admin/install.php' (UTF-8)
--2017-04-28 21:48:26--  http://10.96.20.48/wp-admin/install.php
Reusing existing connection to 10.96.20.48:80.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'index.html.1'

index.html.1                                                [ <=>                                                                                                                           ]  10.84K  --.-KB/s   in 0.03s  

2017-04-28 21:48:27 (404 KB/s) - 'index.html.1' saved [11100]

Apache logs from the wordpress pod:

10.44.0.0 - - [28/Apr/2017:21:48:26 +0000] "GET / HTTP/1.1" 302 376 "-" "Wget/1.16 (linux-gnu)"
10.44.0.0 - - [28/Apr/2017:21:48:26 +0000] "GET /wp-admin/install.php HTTP/1.1" 200 11473 "-" "Wget/1.16 (linux-gnu)"

This request appears to be masqueraded (MASQ).

A request to the NodePort on the cluster IP does not work at all; the request times out.

root@mysql:/# wget http://10.96.20.48:31484 
converted 'http://10.96.20.48:31484' (ANSI_X3.4-1968) -> 'http://10.96.20.48:31484' (UTF-8)
--2017-04-28 21:50:27--  http://10.96.20.48:31484/
Connecting to 10.96.20.48:31484... failed: Connection timed out.
Retrying.

--2017-04-28 21:52:36--  (try: 2)  http://10.96.20.48:31484/
Connecting to 10.96.20.48:31484... ^C

@MrHohn
Member

MrHohn commented Apr 28, 2017

Thanks for the experiments.

A request to the NodePort on the cluster IP does not work at all; the request times out.

Yeah, this would not work; the nodePort is opened on node IPs, not on the service IP.
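
For example (a sketch; <node-ip> is a placeholder for any node's address, using the nodePort from your service dump), from inside the cluster you would target a node address instead:

wget http://<node-ip>:31484

Note that with OnlyLocal, only nodes that host a backend pod will forward the request.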

To quickly summarize what you got this time:

  • pod to pod communication preserves the source IP.
  • pod to service communication loses the source IP. (Unexpected; this should only happen when a pod tries to access a service served by itself.)

This is surprising and sounds like the issue is wider than just "OnlyLocal" services. I wonder how we ensure compatibility across the various k8s network plugins? cc @freehan

@tomte76
Author

tomte76 commented Apr 28, 2017

Can you suggest another network plugin I can try, to see if the problem goes away? It is not much work to redeploy the whole setup, even with new VMs or in parallel if necessary. As far as I can judge at the moment, all we need is a multi-node network overlay; we are not bound to weave if other solutions are available. What we badly need is a way to preserve the external IP in on-premises Kubernetes clusters.

Meanwhile, I had some success using an NGINX ingress controller. But as it relies on X-Forwarded-For headers, it will not solve the problem for other protocols. And TLS seems to be available only via SNI, which causes concerns in the project team.

@MrHohn
Member

MrHohn commented Apr 28, 2017

@tomte76 Could you file an issue against weaveworks/weave as well? The Weave folks might have more insight into this.

@tomte76
Author

tomte76 commented Apr 28, 2017

Yes, I can file that. But maybe I should first try another network plugin, to see if the problem disappears and thus make sure it is related to weave?

@MrHohn
Member

MrHohn commented Apr 28, 2017

Sorry, your comment popped up after I sent that. Perhaps try flannel?

@tomte76
Author

tomte76 commented Apr 28, 2017

Thank you. I redeployed and installed flannel using kube-flannel-rbac.yml and kube-flannel.yml, but at the moment the DNS pod does not start.

po/kube-dns-3913472980-1d76n 0/3 rpc error: code = 2 desc = failed to start container "cc3fb3bc6a1ba24857f015b9f4f7e41b783ac3ffa900904b61119c1aa4c59e2b": Error response from daemon: {"message":"cannot join network of a non running container: fc35edecd1aa09445e86ed4b8aebac7bfc413631a4c08a465096be17a99c0029"} 3 16m <none> dt-kube-test-3

I'll look into that in the next few days. It's pretty late at night here in Germany.

@MrHohn
Member

MrHohn commented May 1, 2017

I made some mistakes earlier. The iptables rules mentioned above seem to break the external source IP preservation mechanism:

-A POSTROUTING -j WEAVE
...
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE

The packet flow is as follows:

  1. The external client sends a request to the node's IP on the nodePort.
  2. The packet enters the node through eth0.
  3. PREROUTING: The packet gets DNATed to the backend pod IP (default/wpfrontend).
  4. FORWARD: A routing decision is made to forward this packet through the 'weave' bridge.
  5. POSTROUTING: The packet gets SNATed to weave's IP (per the weave rule above).
  6. The packet is sent to the backend pod through veth-XXX.

Though I don't have a theory yet for why a packet that goes through the service VIP would fail to preserve the source pod IP.
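
One way to confirm step 5 (a diagnostic sketch, assuming conntrack-tools is installed on the node) would be to inspect the NAT table entry for the test connection; the entry should show the source rewritten to weave's 10.32.0.1:

conntrack -L -p tcp --dport 80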

@bboreham
Contributor

bboreham commented May 3, 2017

Hi, I work on Weave Net; just seeing this issue for the first time.

I can understand how OnlyLocal could work in a situation like AWS or GCE where the cloudprovider has set up routes that make pod addresses reachable from anywhere.

I cannot see how it could ever work with an overlay network, in the absence of those routing rules. How would the return packets get back to the original client?

@tomte76
Author

tomte76 commented May 3, 2017

In our case we are trying to find a way to preserve the client IP of connections in our on-premises setup. We have Kubernetes clusters deployed on VMs, e.g. on OpenStack or Proxmox running on bare metal.

As far as I understand, I need the overlay network to spread the pods and their communication across the minions in my cluster. And we would need OnlyLocal so that some pods can run pinned to dedicated VMs, to which we can route IP space or port-NAT the NodePorts from an external firewall. This would enable us to preserve the external IP in these pods and set up any kind of ingress there, even if we have to rely on the client's IP in some way (blacklists, whitelists, Geo-IP, etc.).

Behind these pods we would use the overlay to communicate with the backend pods running on different VMs in the cluster. We are aware that the client source IP will not be preserved in this step; it is in-cluster communication from the ingress nodes to the backend systems.

This is comparable to the nginx ingress project, which uses the host network to preserve the client IP. But that ingress seems to be optimized for HTTP and supports TLS only via SNI, and the host network seems to have other side effects.

All this is just to clarify what we are trying to do. In summary, we are trying to set up something like the cloud providers' ingress for our on-premises clusters.

@MrHohn
Member

MrHohn commented May 3, 2017

cc @dnardo

@tomte76
Author

tomte76 commented May 3, 2017

In the meantime I managed to deploy a working flannel setup instead of weave. Sorry for the delay; I had some trouble with the OpenStack security groups. The observed behaviour also exists with flannel. The MASQ IP is now taken from the pod network CIDR that flannel requires ("--pod-network-cidr=10.244.0.0/16").
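
For reference, the redeploy was roughly (a sketch from memory, using the files mentioned above):

kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f kube-flannel-rbac.yml
kubectl apply -f kube-flannel.yml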

The wget from a cluster-external IP against the OpenStack floating IP and the NodePort looks as follows:

wget http://62.50.111.95:30764
--2017-05-03 22:22:21--  http://62.50.111.95:30764/
Connecting to 62.50.111.95:30764 ... connected.
HTTP request sent, awaiting response ... 200 OK
Length: 91 [application/json]
Saving to: 'index.html'

index.html.1                                     100%[=======================================================================================================>]      91  --.-KB/s    in 0s      

2017-05-03 22:22:21 (5.79 MB/s) - 'index.html' saved [91/91]

Logs from the wordpress pod.

10.244.2.1 - - [03/May/2017:20:18:55 +0000] "GET / HTTP/1.1" 302 383 "-" "Wget/1.19.1 (darwin16.4.0)"
10.244.2.1 - - [03/May/2017:20:18:55 +0000] "GET /wp-admin/install.php HTTP/1.1" 200 11515 "-" "Wget/1.19.1 (darwin16.4.0)"

The wordpress pod is located on dt-kube-minion-2. Looking there, I can find the MASQ IP assigned to the cni0 interface:

cni0      Link encap:Ethernet  HWaddr 0a:58:0a:f4:02:01  
          inet addr:10.244.2.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::5c54:c6ff:fe2c:d6c1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:521 errors:0 dropped:0 overruns:0 frame:0
          TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:86377 (84.3 KiB)  TX bytes:87816 (85.7 KiB)

The iptables rules on dt-kube-minion-2 look like this:

# Generated by iptables-save v1.4.21 on Wed May  3 22:34:58 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4NTE53GTRZSRI4PF - [0:0]
:KUBE-SEP-IT2ZTR26TO4XFPTO - [0:0]
:KUBE-SEP-OFVO54SGCOEBOJ32 - [0:0]
:KUBE-SEP-YIL6JZP7A3QYXJU2 - [0:0]
:KUBE-SEP-YOA7JIOALEOZSTXG - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-M7XME3WTB36R42AM - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-WPVPFUBZSXLUXOBX - [0:0]
:KUBE-XLB-WPVPFUBZSXLUXOBX - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/wpfrontend:" -m tcp --dport 32516 -j KUBE-XLB-WPVPFUBZSXLUXOBX
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4NTE53GTRZSRI4PF -s 10.244.2.2/32 -m comment --comment "default/wpfrontend:" -j KUBE-MARK-MASQ
-A KUBE-SEP-4NTE53GTRZSRI4PF -p tcp -m comment --comment "default/wpfrontend:" -m tcp -j DNAT --to-destination 10.244.2.2:80
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-OFVO54SGCOEBOJ32 -s 10.244.3.2/32 -m comment --comment "default/mysql:" -j KUBE-MARK-MASQ
-A KUBE-SEP-OFVO54SGCOEBOJ32 -p tcp -m comment --comment "default/mysql:" -m tcp -j DNAT --to-destination 10.244.3.2:3306
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-YOA7JIOALEOZSTXG -s 192.168.141.42/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-YOA7JIOALEOZSTXG -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-YOA7JIOALEOZSTXG --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.141.42:6443
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.233.2/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.233.2/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-M7XME3WTB36R42AM
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.150.223/32 -p tcp -m comment --comment "default/wpfrontend: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.150.223/32 -p tcp -m comment --comment "default/wpfrontend: cluster IP" -m tcp --dport 80 -j KUBE-SVC-WPVPFUBZSXLUXOBX
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-M7XME3WTB36R42AM -m comment --comment "default/mysql:" -j KUBE-SEP-OFVO54SGCOEBOJ32
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-YOA7JIOALEOZSTXG --mask 255.255.255.255 --rsource -j KUBE-SEP-YOA7JIOALEOZSTXG
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-YOA7JIOALEOZSTXG
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SVC-WPVPFUBZSXLUXOBX -m comment --comment "default/wpfrontend:" -j KUBE-SEP-4NTE53GTRZSRI4PF
-A KUBE-XLB-WPVPFUBZSXLUXOBX -s 10.244.0.0/16 -m comment --comment "Redirect pods trying to reach external loadbalancer VIP to clusterIP" -j KUBE-SVC-WPVPFUBZSXLUXOBX
-A KUBE-XLB-WPVPFUBZSXLUXOBX -m comment --comment "Balancing rule 0 for default/wpfrontend:" -j KUBE-SEP-4NTE53GTRZSRI4PF
COMMIT
# Completed on Wed May  3 22:34:58 2017
# Generated by iptables-save v1.4.21 on Wed May  3 22:34:58 2017
*filter
:INPUT ACCEPT [241:77023]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [249:25480]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:fail2ban-ssh - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A fail2ban-ssh -j RETURN
COMMIT
# Completed on Wed May  3 22:34:58 2017

The result looks very similar to weave. I assume the packet gets masqueraded while traversing cni0 to reach the pod, which has an IP address in the flannel node assignment 10.244.2.0/24, in this case 10.244.2.2.

NAME           READY     STATUS    RESTARTS   AGE       IP           NODE
po/mysql       1/1       Running   0          24m       10.244.3.2   dt-kube-minion-4
po/wordpress   1/1       Running   0          23m       10.244.2.2   dt-kube-minion-2

NAME             CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE       SELECTOR
svc/kubernetes   10.96.0.1        <none>        443/TCP        31m       <none>
svc/mysql        10.96.233.2      <none>        3306/TCP       24m       name=mysql
svc/wpfrontend   10.104.150.223   <pending>     80:32516/TCP   22m       name=wordpress

@tomte76
Author

tomte76 commented May 3, 2017

If I insert one iptables rule

iptables -t nat -I POSTROUTING 3 ! -s 10.244.0.0/16 -d 10.244.2.2/32 -p tcp --dport 80 -j RETURN

it works as expected

212.9.183.78 - - [03/May/2017:21:00:11 +0000] "GET / HTTP/1.1" 302 327 "-" "curl/7.54.0"

iptables-save looks like this now

# Generated by iptables-save v1.4.21 on Wed May  3 23:02:22 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4NTE53GTRZSRI4PF - [0:0]
:KUBE-SEP-IT2ZTR26TO4XFPTO - [0:0]
:KUBE-SEP-OFVO54SGCOEBOJ32 - [0:0]
:KUBE-SEP-YIL6JZP7A3QYXJU2 - [0:0]
:KUBE-SEP-YOA7JIOALEOZSTXG - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-M7XME3WTB36R42AM - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-WPVPFUBZSXLUXOBX - [0:0]
:KUBE-XLB-WPVPFUBZSXLUXOBX - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.2.2/32 -p tcp -m tcp --dport 80 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/wpfrontend:" -m tcp --dport 32516 -j KUBE-XLB-WPVPFUBZSXLUXOBX
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4NTE53GTRZSRI4PF -s 10.244.2.2/32 -m comment --comment "default/wpfrontend:" -j KUBE-MARK-MASQ
-A KUBE-SEP-4NTE53GTRZSRI4PF -p tcp -m comment --comment "default/wpfrontend:" -m tcp -j DNAT --to-destination 10.244.2.2:80
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-OFVO54SGCOEBOJ32 -s 10.244.3.2/32 -m comment --comment "default/mysql:" -j KUBE-MARK-MASQ
-A KUBE-SEP-OFVO54SGCOEBOJ32 -p tcp -m comment --comment "default/mysql:" -m tcp -j DNAT --to-destination 10.244.3.2:3306
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-YOA7JIOALEOZSTXG -s 192.168.141.42/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-YOA7JIOALEOZSTXG -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-YOA7JIOALEOZSTXG --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.141.42:6443
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.233.2/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.233.2/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-M7XME3WTB36R42AM
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.150.223/32 -p tcp -m comment --comment "default/wpfrontend: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.150.223/32 -p tcp -m comment --comment "default/wpfrontend: cluster IP" -m tcp --dport 80 -j KUBE-SVC-WPVPFUBZSXLUXOBX
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-M7XME3WTB36R42AM -m comment --comment "default/mysql:" -j KUBE-SEP-OFVO54SGCOEBOJ32
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-YOA7JIOALEOZSTXG --mask 255.255.255.255 --rsource -j KUBE-SEP-YOA7JIOALEOZSTXG
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-YOA7JIOALEOZSTXG
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SVC-WPVPFUBZSXLUXOBX -m comment --comment "default/wpfrontend:" -j KUBE-SEP-4NTE53GTRZSRI4PF
-A KUBE-XLB-WPVPFUBZSXLUXOBX -s 10.244.0.0/16 -m comment --comment "Redirect pods trying to reach external loadbalancer VIP to clusterIP" -j KUBE-SVC-WPVPFUBZSXLUXOBX
-A KUBE-XLB-WPVPFUBZSXLUXOBX -m comment --comment "Balancing rule 0 for default/wpfrontend:" -j KUBE-SEP-4NTE53GTRZSRI4PF
COMMIT
# Completed on Wed May  3 23:02:22 2017
# Generated by iptables-save v1.4.21 on Wed May  3 23:02:22 2017
*filter
:INPUT ACCEPT [118:40917]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [172:20043]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:fail2ban-ssh - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A fail2ban-ssh -j RETURN
COMMIT
# Completed on Wed May  3 23:02:22 2017
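
A broader variant of the workaround would exempt the node's entire pod subnet instead of a single endpoint (a sketch only; 10.244.2.0/24 is this node's flannel assignment, and note this skips masquerading for all external traffic into local pods, not just for the OnlyLocal service):

iptables -t nat -I POSTROUTING 3 ! -s 10.244.0.0/16 -d 10.244.2.0/24 -j RETURN

Either way the rule is hand-maintained and would have to track endpoint and subnet changes.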

@bboreham
Contributor

bboreham commented May 4, 2017

So what does the return path look like? Are the packets from the pod coming back to the original client with the pod's IP as source address, or is that getting rewritten to the service IP?

@MrHohn
Member

MrHohn commented May 4, 2017

So what does the return path look like? Are the packets from the pod coming back to the original client with the pod's IP as source address, or is that getting rewritten to the service IP?

In this case the service IP shouldn't be involved, as the original destination IP is the node IP rather than the service IP. I'd expect the return packets' source address to be rewritten to the node IP.

@MrHohn
Member

MrHohn commented May 4, 2017

I can understand how OnlyLocal could work in a situation like AWS or GCE where the cloudprovider has set up routes that make pod addresses reachable from anywhere.

I cannot see how it could ever work with an overlay network, in the absence of those routing rules. How would the return packets get back to the original client?

@bboreham Also to clarify: this OnlyLocal feature ensures that all external requests sent to the specific service (through nodeIP:serviceNodePort) are routed to backend pods on the same node. The incoming path remains on the node without going out to other nodes, so it doesn't seem like routing rules outside of this node are required.
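
As a quick check (a sketch; 30673 is the healthcheck nodeport from the earlier service dump, and <node-ip> is a placeholder), the healthcheck endpoint reports how many local endpoints a node has:

curl http://<node-ip>:30673/healthz

A node that reports "localEndpoints": 0 should not receive traffic from a well-behaved external load balancer.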

@bboreham
Contributor

bboreham commented May 5, 2017

OK, I marked the Weave Net issue as a feature request, since Weave Net is currently hard-coded to masquerade everything on and off the overlay. I suggest you close this issue.

@manuperera

manuperera commented May 5, 2017

I have the same problem. I have a service listening on port 1813, and I have included the following configuration to preserve the source IP inside the container running in Kubernetes.

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/external-traffic: "OnlyLocal"
  labels:
    name: connector-udp
  name: connector-udp
spec:
  ports:
    # The port that this service should serve on.
    - port: 1813
      name: accounting
      targetPort: 1813
      nodePort: 31813
      protocol: UDP
  externalIPs:
    - "172.19.18.72"
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
     app: connector
  type: LoadBalancer


When I send UDP traffic from a simulator to the Kubernetes service, I see that the source IP is not the IP address of the simulator's machine.

root@connector-7z612:/# tcpdump -n dst port 1813
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:44:14.602461 IP 10.1.47.0.35686 > 10.1.67.2.1813: RADIUS, Accounting-Request (4), id: 0x01 length: 164
09:44:15.502967 IP 10.1.47.0.35686 > 10.1.67.2.1813: RADIUS, Accounting-Request (4), id: 0x02 length: 164
09:44:16.404781 IP 10.1.47.0.35686 > 10.1.67.2.1813: RADIUS, Accounting-Request (4), id: 0x03 length: 164
09:44:17.305984 IP 10.1.47.0.35686 > 10.1.67.2.1813: RADIUS, Accounting-Request (4), id: 0x04 length: 164
09:44:18.207565 IP 10.1.47.0.35686 > 10.1.67.2.1813: RADIUS, Accounting-Request (4), id: 0x05 length: 164
09:44:19.109202 IP 10.1.47.0.35686 > 10.1.67.2.1813: RADIUS, Accounting-Request (4), id: 0x06 length: 164

However, on the CoreOS host I see the source IP correctly.

root@coreos002:~# tcpdump -n dst port 1813
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:44:14.584830 IP 172.19.18.53.35686 > 172.19.18.72.1813: RADIUS, Accounting-Request (4), id: 0x01 length: 164
09:44:15.485514 IP 172.19.18.53.35686 > 172.19.18.72.1813: RADIUS, Accounting-Request (4), id: 0x02 length: 164
09:44:16.387202 IP 172.19.18.53.35686 > 172.19.18.72.1813: RADIUS, Accounting-Request (4), id: 0x03 length: 164
09:44:17.288524 IP 172.19.18.53.35686 > 172.19.18.72.1813: RADIUS, Accounting-Request (4), id: 0x04 length: 164
09:44:18.190003 IP 172.19.18.53.35686 > 172.19.18.72.1813: RADIUS, Accounting-Request (4), id: 0x05 length: 164
09:44:19.091797 IP 172.19.18.53.35686 > 172.19.18.72.1813: RADIUS, Accounting-Request (4), id: 0x06 length: 164

I'm using flannel and Calico for the network configuration, so I don't think this is a problem with Weave Net only.

Is there any solution to my problem?
Thanks in advance.

@thockin
Member

thockin commented May 5, 2017 via email

@dcbw dcbw added the sig/network Categorizes an issue or PR as relevant to SIG Network. label May 18, 2017
@caseydavenport
Member

Adding @tomdee for the flannel/canal bits.

@caseydavenport
Member

I've raised this issue against flannel: flannel-io/flannel#734

We can close this now.

@MrHohn
Member

MrHohn commented May 25, 2017

@caseydavenport Thanks!

/close
