
POD to Service no response #1073

Closed
prasenforu opened this issue Sep 1, 2017 · 38 comments

Comments

@prasenforu

prasenforu commented Sep 1, 2017

Version

CentOS 3.7
OC 3.6
Ansible 2.3
docker 1.12.6
kubectl 1.6.1

NO policy set up as of now

Ansible hostfile

[OSEv3:children]
nodes
masters
nfs
etcd

[OSEv3:vars]
openshift_master_default_subdomain=cloudapps.cloud-cafe.in
ansible_ssh_user=root
deployment_type=origin
os_sdn_network_plugin_name=cni
openshift_use_calico=true
openshift_use_openshift_sdn=false
openshift_disable_check=disk_availability,memory_availability
openshift_release=v3.6
openshift_image_tag=v3.6.0


# Comment the following to disable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/users.htpasswd'}]

[nodes]
ose-master  openshift_ip=10.90.1.208 openshift_public_ip=XXXXXXXXX openshift_hostname=ose-master.cloud-cafe.in openshift_public_hostname=XXXXXXXXX openshift_node_labels="{'region': 'infra'}" openshift_schedulable=False
ose-hub  openshift_ip=10.90.1.209 openshift_public_ip=10.90.1.209 openshift_hostname=ose-hub.cloud-cafe.in openshift_public_hostname=ose-hub.cloud-cafe.in openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True
ose-node1  openshift_ip=10.90.2.210 openshift_public_ip=10.90.2.210 openshift_hostname=ose-node1.cloud-cafe.in openshift_public_hostname=ose-node1.cloud-cafe.in openshift_schedulable=True
ose-node2  openshift_ip=10.90.2.211 openshift_public_ip=10.90.2.210 openshift_hostname=ose-node2.cloud-cafe.in openshift_public_hostname=ose-node2.cloud-cafe.in openshift_schedulable=True

[masters]
ose-master  openshift_ip=10.90.1.208 openshift_public_ip=XXXXXXXXX openshift_hostname=ose-master.cloud-cafe.in openshift_public_hostname=XXXXXXXXX

[nfs]
ose-hub  openshift_ip=10.90.1.209 openshift_public_ip=10.90.1.209 openshift_hostname=ose-hub.cloud-cafe.in openshift_public_hostname=ose-hub.cloud-cafe.in

[etcd]
ose-master  openshift_ip=10.90.1.208 openshift_public_ip=XXXXXXXXX openshift_hostname=ose-master.cloud-cafe.in openshift_public_hostname=XXXXXXXXX
[root@ose-master ~]# oc get po --all-namespaces -o wide
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE       IP               NODE
default       docker-registry-1-gzq7c                     1/1       Running   2          3h        10.128.157.168   ose-hub.cloud-cafe.in
default       registry-console-1-w9lkc                    1/1       Running   4          2d        10.130.134.197   ose-node1.cloud-cafe.in
default       router-1-k7h18                              1/1       Running   3          23h       10.90.1.209      ose-hub.cloud-cafe.in
kube-system   calico-policy-controller-4072784145-h60v8   1/1       Running   6          3d        10.90.1.208      ose-master.cloud-cafe.in
prometheus    alertmanager-1-50kf3                        1/1       Running   3          23h       10.128.157.169   ose-hub.cloud-cafe.in
prometheus    node-exporter-3d1nn                         1/1       Running   3          1d        10.90.2.211      ose-node2.cloud-cafe.in
prometheus    node-exporter-64x5w                         1/1       Running   4          1d        10.90.1.209      ose-hub.cloud-cafe.in
prometheus    node-exporter-jv283                         1/1       Running   4          1d        10.90.1.208      ose-master.cloud-cafe.in
prometheus    node-exporter-wlxq6                         1/1       Running   3          1d        10.90.2.210      ose-node1.cloud-cafe.in

POD to Service no response

/alertmanager # wget http://172.30.200.102:9093
Connecting to 172.30.200.102:9093 (172.30.200.102:9093)
^C
/alertmanager # ping 172.30.200.102
PING 172.30.200.102 (172.30.200.102): 56 data bytes

^C^C
--- 172.30.200.102 ping statistics ---
18 packets transmitted, 0 packets received, 100% packet loss
/alertmanager # ping 172.30.0.1
PING 172.30.0.1 (172.30.0.1): 56 data bytes
^C
--- 172.30.0.1 ping statistics ---
25 packets transmitted, 0 packets received, 100% packet loss

POD to POD response OK

[root@ose-master ~]# oc rsh po/alertmanager-1-50kf3 sh
/alertmanager # ping 10.90.1.208
PING 10.90.1.208 (10.90.1.208): 56 data bytes
64 bytes from 10.90.1.208: seq=0 ttl=63 time=0.373 ms
64 bytes from 10.90.1.208: seq=1 ttl=63 time=0.519 ms
64 bytes from 10.90.1.208: seq=2 ttl=63 time=0.564 ms
^C
--- 10.90.1.208 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.373/0.485/0.564 ms
/alertmanager # ping 10.128.157.168
PING 10.128.157.168 (10.128.157.168): 56 data bytes
64 bytes from 10.128.157.168: seq=0 ttl=63 time=0.075 ms
64 bytes from 10.128.157.168: seq=1 ttl=63 time=0.066 ms
64 bytes from 10.128.157.168: seq=2 ttl=63 time=0.068 ms
64 bytes from 10.128.157.168: seq=3 ttl=63 time=0.070 ms
64 bytes from 10.128.157.168: seq=4 ttl=63 time=0.073 ms
^C
--- 10.128.157.168 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.066/0.070/0.075 ms
/alertmanager #
[root@ose-master ~]# calicoctl get profiles -o wide
NAME                      TAGS
k8s_ns.default            k8s_ns.default
k8s_ns.kube-public        k8s_ns.kube-public
k8s_ns.kube-system        k8s_ns.kube-system
k8s_ns.logging            k8s_ns.logging
k8s_ns.management-infra   k8s_ns.management-infra
k8s_ns.openshift          k8s_ns.openshift
k8s_ns.openshift-infra    k8s_ns.openshift-infra
k8s_ns.prometheus         k8s_ns.prometheus

[root@ose-master ~]# calicoctl get policy
NAME

[root@ose-master ~]#
@fasaxc
Member

fasaxc commented Sep 1, 2017

I'm not too familiar with OpenShift, but in vanilla k8s I'd expect to see kube-proxy running. kube-proxy is the pod that implements service VIPs on top of Calico's pod networking.
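
For what it's worth, a quick hedged check (assuming the proxy runs in iptables mode; on OpenShift Origin 3.6 the proxier is typically embedded in the origin-node process rather than running as a separate pod):

# Look for a kube-proxy pod/process, or the node process that embeds the proxier
ps aux | grep -E 'kube-proxy|openshift start node' | grep -v grep

# An iptables-mode proxy creates the KUBE-SERVICES chain in the nat table
sudo iptables -t nat -L KUBE-SERVICES -n | head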

@prasenforu
Author

Initially I was facing a different issue; that was resolved. To me it's a Calico issue. Let me link that also: openshift/openshift-ansible#5235

@tmjd
Member

tmjd commented Sep 1, 2017

Calico does not configure services; it provides IP addresses for containers and enforces policy if it is configured. kube-proxy is responsible for setting up services and the proper rules for redirecting traffic to the appropriate IP address. Since pod-to-pod traffic is working and you have no policy, a likely candidate is kube-proxy or access to the service itself. kube-proxy does need to be configured with the Calico IP pool CIDR; I'm not too familiar with OpenShift, but I would assume that configuration is handled automatically.
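
A hedged way to check that last point, assuming the standard kube-proxy flag name and that the CIDR shows up on the command line; the value it reports should cover the Calico IP pool (192.168.0.0/16 in this setup):

# The proxier's --cluster-cidr, however it is launched, should contain the Calico pool
ps auxww | grep -o -- '--cluster-cidr=[^ ]*' | sort -u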

I had a couple thoughts from the issue you linked:

  • The logs attached show that the containers are being stopped; could you locate the logs from when the containers were created? There may be more helpful errors at creation time if this is a CNI/Calico issue.
  • You mentioned in the issue that your previous problem was solved with changes to a firewall. Could the firewall be dropping the packets destined for your service addresses?

@prasenforu
Author

In the issue I linked, I provided system logs from the host where the pod was running.

In the firewall, I allowed the etcd port, which resolved the previous issue.

@prasenforu
Author

The error was CNI failed to retrieve network namespace path

@tmjd
Member

tmjd commented Sep 1, 2017

The logs show the teardown having a problem, and the root of the problem is Error: No such container. Calico has no control over whether the container exists or not; that is handled by K8s. Actually the network namespace is also the responsibility of K8s, so I think the cause of that error is outside of Calico's control.

@tmjd
Member

tmjd commented Sep 1, 2017

You should try taking a look at the iptables rules (I'd suggest sudo iptables-save) and check that there are rules that match on the packet destination for your service IP (172.30.200.102) and direct the traffic to the appropriate pod IPs. This is, of course, after verifying that kube-proxy is running, as fasaxc suggested.
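
For example (a hedged sketch using the service IP from this thread), something along these lines should show a KUBE-SERVICES match for the VIP plus KUBE-SVC-*/KUBE-SEP-* chains that DNAT to the pod IPs:

sudo iptables-save -t nat | grep 172.30.200.102
sudo iptables-save -t nat | grep -E 'KUBE-(SVC|SEP)' | head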

@prasenforu
Author

Hmmm, but at the start I mentioned that all containers are running, by executing the command oc get po --all-namespaces -o wide

@prasenforu
Author

By default OpenShift uses an iptables-based proxy.

openshift_node_proxy_mode=iptables

@prasenforu
Author

Not sure, but could it be related to --hairpin-mode?

@prasenforu
Author

prasenforu commented Sep 6, 2017

Maybe it will help you.

@fasaxc
@tmjd

POD to Service no response

[root@ose-master ~]# oc get po,svc,route -o wide
NAME                         READY     STATUS    RESTARTS   AGE       IP                NODE
po/docker-registry-1-6lvmw   1/1       Running   0          16m       192.168.157.137   ose-hub.cloud-cafe.in
po/mongo                     1/1       Running   1          1d        192.168.129.66    ose-node2.cloud-cafe.in
po/myemp-1-5hl17             1/1       Running   1          1d        192.168.134.194   ose-node1.cloud-cafe.in
po/router-1-dxjhm            2/2       Running   0          16m       10.90.1.209       ose-hub.cloud-cafe.in

NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE       SELECTOR
svc/docker-registry   172.30.194.139   <none>        5000/TCP                  1d        docker-registry=default
svc/kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     1d        <none>
svc/mongo             172.30.27.0      <none>        27017/TCP                 1d        context=docker-pkar,name=mongo
svc/myemp             172.30.200.91    <none>        80/TCP                    1d        context=docker-pkar,deploymentconfig=myemp,name=myemp
svc/router            172.30.199.242   <none>        80/TCP,443/TCP,1936/TCP   1d        router=router

NAME                     HOST/PORT                                         PATH      SERVICES          PORT       TERMINATION   WILDCARD
routes/docker-registry   docker-registry-default.cloudapps.cloud-cafe.in             docker-registry   5000-tcp                 None
routes/myemp             sampleapp.cloudapps.cloud-cafe.in                           myemp             80-tcp                   None

Able to connect from the router pod to the kubernetes, router & docker-registry services (which are running on the same host)

sh-4.2$ curl -v telnet://172.30.0.1:443
* About to connect() to 172.30.0.1 port 443 (#0)
*   Trying 172.30.0.1...
* Connected to 172.30.0.1 (172.30.0.1) port 443 (#0)
^C

sh-4.2$ curl -v telnet://172.30.0.1:53
* About to connect() to 172.30.0.1 port 53 (#0)
*   Trying 172.30.0.1...
* Connected to 172.30.0.1 (172.30.0.1) port 53 (#0)
^C

sh-4.2$ curl -v telnet://172.30.194.139:5000
* About to connect() to 172.30.194.139 port 5000 (#0)
*   Trying 172.30.194.139...
* Connected to 172.30.194.139 (172.30.194.139) port 5000 (#0)
^C

sh-4.2$ curl -v telnet://172.30.199.242:80
* About to connect() to 172.30.199.242 port 80 (#0)
*   Trying 172.30.199.242...
* Connected to 172.30.199.242 (172.30.199.242) port 80 (#0)
^C

But not able to connect from the router pod to mongo or myemp (which are running on a different host)

sh-4.2$ curl -v telnet://172.30.27.0:27017
* About to connect() to 172.30.27.0 port 27017 (#0)
*   Trying 172.30.27.0...
^C

#### Then I switched to the myemp pod; from there it is able to connect to its own service but not to mongo (which is running on a different host)

[root@ose-master ~]# oc rsh po/myemp-1-5hl17 bash
I have no name!@myemp-1-5hl17:/opt/sample/Employee$  curl -v telnet://172.30.200.91:80
* Rebuilt URL to: telnet://172.30.200.91:80/
* Hostname was NOT found in DNS cache
*   Trying 172.30.200.91...
* Connected to 172.30.200.91 (172.30.200.91) port 80 (#0)
^C

I have no name!@myemp-1-5hl17:/opt/sample/Employee$ curl -v telnet://172.30.27.0:27017
* Rebuilt URL to: telnet://172.30.27.0:27017/
* Hostname was NOT found in DNS cache
*   Trying 172.30.27.0...
^C

[root@ose-master ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.90.1.209  | node-to-node mesh | up    | 13:48:21 | Established |
| 10.90.2.210  | node-to-node mesh | up    | 13:48:23 | Established |
| 10.90.2.211  | node-to-node mesh | up    | 13:48:22 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

IP route output
[root@ose-hub ~]# ifconfig
cali0303fa1ae2a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::802:7fff:fee5:1cec  prefixlen 64  scopeid 0x20<link>
        ether 0a:02:7f:e5:1c:ec  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:fc:3e:73:0e  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 10.90.1.209  netmask 255.255.255.0  broadcast 10.90.1.255
        inet6 fe80::7e:feff:fec5:483d  prefixlen 64  scopeid 0x20<link>
        ether 02:7e:fe:c5:48:3d  txqueuelen 1000  (Ethernet)
        RX packets 16653  bytes 2612598 (2.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17037  bytes 1424754 (1.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 5482  bytes 533947 (521.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5482  bytes 533947 (521.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 192.168.157.128  netmask 255.255.255.255
        tunnel   txqueuelen 1  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 340  bytes 20400 (19.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ose-hub ~]# ip route
default via 10.90.1.1 dev eth0  proto static  metric 100
10.90.1.0/24 dev eth0  proto kernel  scope link  src 10.90.1.209  metric 100
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1
192.168.103.64/26 via 10.90.1.208 dev tunl0  proto bird onlink
192.168.129.64/26 via 10.90.2.211 dev tunl0  proto bird onlink
192.168.134.192/26 via 10.90.2.210 dev tunl0  proto bird onlink
blackhole 192.168.157.128/26  proto bird
192.168.157.139 dev cali0303fa1ae2a  scope link

Basically there is no pod-to-pod communication if the pods are on different hosts.

@ozdanborne
Member

ozdanborne commented Sep 6, 2017

@prasenforu are you deploying on a public cloud? By default Calico for OpenShift enables IPIP encapsulation. You may need to either:

  • allow IPIP traffic through your cloud firewall / security groups, or
  • disable IPIP on the Calico IP pool and disable src/dest checks on your instances.

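For reference, a minimal sketch of the second option on the Calico side, assuming the calicoctl v1-style ipPool resource shown later in this thread (CIDR taken from that output):

calicoctl apply -f - <<EOF
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 192.168.0.0/16
  spec:
    ipip:
      enabled: false
    nat-outgoing: true
EOF
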
@ozdanborne reopened this Sep 6, 2017
@prasenforu
Author

prasenforu commented Sep 7, 2017

@ozdanborne

Yes, I am running in AWS, and my AWS architecture is as follows.

[image: AWS architecture diagram]

Initially I allowed port 179; that is why you can see the calicoctl node status command with INFO as Established,
but I did not know about allowing IPIP traffic, so then I did that as well. But no luck.
So allowing IPIP traffic did not work for me.

Then I tried disabling IPIP mode and disabling src/dest checks, but no luck.

As per both documents, I think disabling src/dest checks on the EC2 hosts is required in both cases.

All EC2 hosts have src/dest checks disabled.
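
For reference, a hedged sketch of doing that from the AWS CLI (the instance IDs are placeholders):

for i in i-0aaaaaaaaaaaaaaaa i-0bbbbbbbbbbbbbbbb i-0cccccccccccccccc i-0dddddddddddddddd; do
  aws ec2 modify-instance-attribute --instance-id "$i" --no-source-dest-check
done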

Right now setup is with disable ipip mode and disable src/dest checks and following some calicoctl command output.

[root@ose-master ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+---------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |  INFO   |
+--------------+-------------------+-------+----------+---------+
| 10.90.1.209  | node-to-node mesh | start | 04:27:50 | Connect |
| 10.90.2.210  | node-to-node mesh | start | 04:27:50 | Connect |
| 10.90.2.211  | node-to-node mesh | start | 04:27:50 | Connect |
+--------------+-------------------+-------+----------+---------+

IPv6 BGP status
No IPv6 peers found.

[root@ose-master ~]# calicoctl get ipPool -o yaml
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 192.168.0.0/16
  spec:
    ipip:
      enabled: true
      mode: always
    nat-outgoing: true

[root@ose-master ~]# calicoctl get nodes --output=wide
NAME                       ASN       IPV4             IPV6
ose-hub.cloud-cafe.in      (64512)   10.90.1.209/24
ose-master.cloud-cafe.in   (64512)   10.90.1.208/24
ose-node1.cloud-cafe.in    (64512)   10.90.2.210/24
ose-node2.cloud-cafe.in    (64512)   10.90.2.211/24

[root@ose-master ~]# oc rsh po/router-1-dxjhm sh
Defaulting container name to router.
sh-4.2$ curl -v telnet://172.30.27.0:27017
* About to connect() to 172.30.27.0 port 27017 (#0)
*   Trying 172.30.27.0...
^C
sh-4.2$ ping 192.168.134.198
PING 192.168.134.198 (192.168.134.198) 56(84) bytes of data.
^C
--- 192.168.134.198 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms

Additionally I am getting errors on the hub host.

Sep  8 00:42:00 ose-hub origin-node: E0908 00:42:00.463063    2448 docker_sandbox.go:205] Failed to stop sandbox "0766f6ae89d4af5b165f0b2ccc651057a0eb748492550617f126fce9dff33e6f": Error response from daemon: {"message":"No such container: 0766f6ae89d4af5b165f0b2ccc651057a0eb748492550617f126fce9dff33e6f"}
Sep  8 00:42:00 ose-hub origin-node: E0908 00:42:00.463121    2448 remote_runtime.go:109] StopPodSandbox "0766f6ae89d4af5b165f0b2ccc651057a0eb748492550617f126fce9dff33e6f" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "router-1-dxjhm_default" network: CNI failed to retrieve network namespace path: Error: No such container: 0766f6ae89d4af5b165f0b2ccc651057a0eb748492550617f126fce9dff33e6f
Sep  8 00:42:00 ose-hub origin-node: E0908 00:42:00.463137    2448 kuberuntime_gc.go:138] Failed to stop sandbox "0766f6ae89d4af5b165f0b2ccc651057a0eb748492550617f126fce9dff33e6f" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "router-1-dxjhm_default" network: CNI failed to retrieve network namespace path: Error: No such container: 0766f6ae89d4af5b165f0b2ccc651057a0eb748492550617f126fce9dff33e6f
Sep  8 00:42:00 ose-hub dockerd-current: time="2017-09-08T00:42:00.463413424Z" level=error msg="Handler for GET /v1.24/containers/3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79/json returned error: No such container: 3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79"
Sep  8 00:42:00 ose-hub dockerd-current: time="2017-09-08T00:42:00.463778050Z" level=error msg="Handler for GET /v1.24/containers/3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79/json returned error: No such container: 3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79"
Sep  8 00:42:00 ose-hub dockerd-current: time="2017-09-08T00:42:00.464113730Z" level=info msg="{Action=stop, LoginUID=4294967295, PID=2448}"
Sep  8 00:42:00 ose-hub dockerd-current: time="2017-09-08T00:42:00.464212678Z" level=error msg="Handler for POST /v1.24/containers/3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79/stop returned error: No such container: 3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79"
Sep  8 00:42:00 ose-hub origin-node: E0908 00:42:00.464320    2448 docker_sandbox.go:205] Failed to stop sandbox "3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79": Error response from daemon: {"message":"No such container: 3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79"}
Sep  8 00:42:00 ose-hub origin-node: E0908 00:42:00.464378    2448 remote_runtime.go:109] StopPodSandbox "3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "docker-registry-1-nzn3w_default" network: CNI failed to retrieve network namespace path: Error: No such container: 3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79
Sep  8 00:42:00 ose-hub origin-node: E0908 00:42:00.464389    2448 kuberuntime_gc.go:138] Failed to stop sandbox "3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "docker-registry-1-nzn3w_default" network: CNI failed to retrieve network namespace path: Error: No such container: 3dbd45a729a9a71648547c7241991e7c3ff02ea580836772edb952d3d747fe79
Sep  8 00:42:00 ose-hub dockerd-current: time="2017-09-08T00:42:00.464644959Z" level=error msg="Handler for GET /v1.24/containers/b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07/json returned error: No such container: b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07"
Sep  8 00:42:00 ose-hub dockerd-current: time="2017-09-08T00:42:00.465021583Z" level=error msg="Handler for GET /v1.24/containers/b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07/json returned error: No such container: b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07"
Sep  8 00:42:00 ose-hub dockerd-current: time="2017-09-08T00:42:00.465359602Z" level=info msg="{Action=stop, LoginUID=4294967295, PID=2448}"
Sep  8 00:42:00 ose-hub dockerd-current: time="2017-09-08T00:42:00.465456279Z" level=error msg="Handler for POST /v1.24/containers/b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07/stop returned error: No such container: b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07"
Sep  8 00:42:00 ose-hub origin-node: E0908 00:42:00.465560    2448 docker_sandbox.go:205] Failed to stop sandbox "b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07": Error response from daemon: {"message":"No such container: b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07"}
Sep  8 00:42:00 ose-hub origin-node: E0908 00:42:00.465621    2448 remote_runtime.go:109] StopPodSandbox "b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "router-1-zbv2k_default" network: CNI failed to retrieve network namespace path: Error: No such container: b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07
Sep  8 00:42:00 ose-hub origin-node: E0908 00:42:00.465632    2448 kuberuntime_gc.go:138] Failed to stop sandbox "b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "router-1-zbv2k_default" network: CNI failed to retrieve network namespace path: Error: No such container: b8f7145244c44c0b683f0ff66ea6e244f6e56c181661c40eff0a02290ad4de07

[root@ose-master ~]# oc get all
NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/docker-registry   1          1         1         config
dc/myemp             1          1         1         config
dc/router            1          1         1         config

NAME                   DESIRED   CURRENT   READY     AGE
rc/docker-registry-1   1         1         1         5d
rc/myemp-1             1         1         1         5d
rc/router-1            1         1         1         5d

NAME                     HOST/PORT                                         PATH      SERVICES          PORT       TERMINATION   WILDCARD
routes/docker-registry   docker-registry-default.cloudapps.cloud-cafe.in             docker-registry   5000-tcp                 None
routes/myemp             sampleapp.cloudapps.cloud-cafe.in                           myemp             80-tcp                   None

NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
svc/docker-registry   172.30.194.139   <none>        5000/TCP                  5d
svc/kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     5d
svc/mongo             172.30.27.0      <none>        27017/TCP                 5d
svc/myemp             172.30.200.91    <none>        80/TCP                    5d
svc/router            172.30.199.242   <none>        80/TCP,443/TCP,1936/TCP   5d

NAME                         READY     STATUS    RESTARTS   AGE
po/docker-registry-1-6lvmw   1/1       Running   8          4d
po/mongo                     1/1       Running   9          5d
po/myemp-1-5hl17             1/1       Running   9          5d
po/router-1-dxjhm            2/2       Running   16         4d
[root@ose-master ~]#

@tmjd
Member

tmjd commented Sep 8, 2017

You should have your AWS security groups configured as listed here: https://docs.projectcalico.org/v2.5/reference/public-cloud/aws#configure-security-groups
Once you have a working system, you can try changing things if you don't care for those allowances.

For the following two reasons I believe you need the recommended security group settings; if you already have those, please double-check the settings:

  • The last output of calicoctl node status you included shows that the node-to-node mesh is not working: it should list STATE as up and INFO as Established for each node.

  • You also said

    Right now setup is with disable ipip mode and disable src/dest checks and following some calicoctl command output.

    but your IP pool shows that IPIP is enabled in Calico. I would recommend leaving IPIP enabled (until you have a working system).

@prasenforu
Author

I tried both:
allow IPIP traffic
&
disable IPIP mode and disable src/dest checks

but no luck.

Sorry about my last statement: "disable ipip" is basically enabled, as you can see from the output of calicoctl get ipPool -o

I disabled EC2 src/dest checks.

@tmjd
Member

tmjd commented Sep 8, 2017

I think you should do both: allow IPIP traffic and disable src/dest checks, until you have a working system.

Independent of those options, though, it also looks like your node-to-node mesh is not working, and you need to figure out the problem. I suggest:

  • ensure the security groups are configured
  • check that each host is listening on port 179 (netstat -al | grep -i bgp)
  • try to connect in both directions between all the nodes (including the master) on port 179: run nc -v <ip-address> 179 and you should see Connection to ... succeeded (see the loop sketch after this list)
  • if any of those are unable to connect, you need to figure out where the BGP traffic is being dropped; tcpdump can help with that
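
A hedged loop for the connectivity check above (hostnames and IPs taken from the inventory in this issue; connections from a node to its own address will also show up):

for src in ose-master ose-hub ose-node1 ose-node2; do
  for dst in 10.90.1.208 10.90.1.209 10.90.2.210 10.90.2.211; do
    echo "== $src -> $dst:179 =="
    ssh "$src" "nc -v -w 3 $dst 179 < /dev/null"
  done
done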

@prasenforu
Author

OK, let me try.

Quick question: do I need IPIP enabled?

@kprabhak

kprabhak commented Sep 8, 2017

@prasenforu If you'd like, I'm happy to help with some real-time troubleshooting together with you via Zoom or Hangouts today - let me know.

@prasenforu
Author

@kprabhak

Thanks Karthik.

Unfortunately I am in the IST time zone. Can you please set up a web call (Zoom) on Monday (11 Sep)?

Also let me know a good time.

@prasenforu
Author

prasenforu commented Sep 11, 2017

Today I tried a fresh install using a git clone of openshift-ansible, with EC2 src/dest checks disabled,
and added the security groups as per the document at https://docs.projectcalico.org/v2.5/reference/public-cloud/aws

[image: AWS security group configuration]

I noticed the pod IP range changed.

[root@ose-master ~]# oc get all -o wide
NAME                  DOCKER REPO                                                 TAGS      UPDATED
is/registry-console   docker-registry.default.svc:5000/default/registry-console   latest    About an hour ago

NAME                  REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/docker-registry    1          1         1         config
dc/myemp              1          1         1         config
dc/registry-console   1          1         1         config
dc/router             1          1         1         config

NAME                    DESIRED   CURRENT   READY     AGE       CONTAINER(S)              IMAGE(S)                                                              SELECTOR
rc/docker-registry-1    1         1         1         53m       registry                  openshift/origin-docker-registry:v3.6.0                               deployment=docker-registry-1,deploymentconfig=docker-registry,docker-registry=default
rc/myemp-1              1         1         1         37m       myemp-dc-pod              prasenforu/employee                                                   deployment=myemp-1,deploymentconfig=myemp,name=myemp
rc/registry-console-1   1         1         1         53m       registry-console          cockpit/kubernetes:latest                                             deployment=registry-console-1,deploymentconfig=registry-console,name=registry-console
rc/router-1             1         1         1         40m       router,metrics-exporter   openshift/origin-haproxy-router:v3.6.0,prom/haproxy-exporter:v0.7.1   deployment=router-1,deploymentconfig=router,router=router

NAME                      HOST/PORT                                          PATH      SERVICES           PORT      TERMINATION   WILDCARD
routes/docker-registry    docker-registry-default.cloudapps.cloud-cafe.in              docker-registry    <all>     passthrough   None
routes/myemp              sampleapp.cloudapps.cloud-cafe.in                            myemp              80-tcp                  None
routes/registry-console   registry-console-default.cloudapps.cloud-cafe.in             registry-console   <all>     passthrough   None

NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE       SELECTOR
svc/docker-registry    172.30.65.8      <none>        5000/TCP                  54m       docker-registry=default
svc/kubernetes         172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     2h        <none>
svc/mongo              172.30.223.99    <none>        27017/TCP                 37m       context=docker-pkar,name=mongo
svc/myemp              172.30.200.91    <none>        80/TCP                    37m       context=docker-pkar,deploymentconfig=myemp,name=myemp
svc/registry-console   172.30.226.103   <none>        9000/TCP                  53m       name=registry-console
svc/router             172.30.21.118    <none>        80/TCP,443/TCP,1936/TCP   40m       router=router

NAME                          READY     STATUS    RESTARTS   AGE       IP               NODE
po/docker-registry-1-p4p0l    1/1       Running   0          52m       10.128.157.132   ose-hub.cloud-cafe.in
po/mongo                      1/1       Running   0          37m       10.130.134.193   ose-node1.cloud-cafe.in
po/myemp-1-rs9br              1/1       Running   0          37m       10.128.157.135   ose-hub.cloud-cafe.in
po/registry-console-1-x2pkt   1/1       Running   0          52m       10.129.177.193   ose-node2.cloud-cafe.in
po/router-1-67hsp             2/2       Running   0          40m       10.90.1.209      ose-hub.cloud-cafe.in
[root@ose-master ~]#

[root@ose-master ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.90.1.209  | node-to-node mesh | up    | 14:11:10 | Established |
| 10.90.2.210  | node-to-node mesh | up    | 14:11:27 | Established |
| 10.90.2.211  | node-to-node mesh | up    | 14:11:31 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@ns1 test-n-demo]# for node in {ose-master,ose-hub,ose-node1,ose-node2}; do
> echo "Port 179 listening on $node" && \
> ssh $node " netstat -al | grep -i bgp"
> done
Port 179 listening on ose-master
tcp        0      0 0.0.0.0:bgp             0.0.0.0:*               LISTEN
tcp        0      0 ose-master.cloud-ca:bgp 10.90.1.209:58035       ESTABLISHED
tcp        0      0 ose-master.cloud-:43817 10.90.2.210:bgp         ESTABLISHED
tcp        0      0 ose-master.cloud-:38379 10.90.2.211:bgp         ESTABLISHED
Port 179 listening on ose-hub
tcp        0      0 0.0.0.0:bgp             0.0.0.0:*               LISTEN
tcp        0      0 ose-hub.cloud-caf:36925 10.90.2.210:bgp         ESTABLISHED
tcp        0      0 ose-hub.cloud-caf:58035 10.90.1.208:bgp         ESTABLISHED
tcp        0      0 ose-hub.cloud-caf:47192 10.90.2.211:bgp         ESTABLISHED
Port 179 listening on ose-node1
tcp        0      0 0.0.0.0:bgp             0.0.0.0:*               LISTEN
tcp        0      0 ose-node1.cloud-caf:bgp 10.90.1.209:36925       ESTABLISHED
tcp        0      0 ose-node1.cloud-caf:bgp 10.90.2.211:32876       ESTABLISHED
tcp        0      0 ose-node1.cloud-caf:bgp 10.90.1.208:43817       ESTABLISHED
Port 179 listening on ose-node2
tcp        0      0 0.0.0.0:bgp             0.0.0.0:*               LISTEN
tcp        0      0 ose-node2.cloud-caf:bgp 10.90.1.208:38379       ESTABLISHED
tcp        0      0 ose-node2.cloud-caf:bgp 10.90.1.209:47192       ESTABLISHED
tcp        0      0 ose-node2.cloud-c:32876 10.90.2.210:bgp         ESTABLISHED
[root@ns1 test-n-demo]#

Then I tried:

[root@ose-master ~]# calicoctl create -f - << EOF
> apiVersion: v1
> kind: ipPool
> metadata:
>   cidr: 10.0.0.0/14
> spec:
>   ipip:
>     enabled: true
>   nat-outgoing: true
> EOF
Successfully created 1 'ipPool' resource(s)
[root@ose-master ~]# oc rsh po/router-1-67hsp sh
Defaulting container name to router.
sh-4.2$ ping 10.128.157.132
PING 10.128.157.132 (10.128.157.132) 56(84) bytes of data.
64 bytes from 10.128.157.132: icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from 10.128.157.132: icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from 10.128.157.132: icmp_seq=3 ttl=64 time=0.055 ms
^C
--- 10.128.157.132 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.046/0.053/0.060/0.010 ms
sh-4.2$ ping 10.130.134.193
PING 10.130.134.193 (10.130.134.193) 56(84) bytes of data.
^C
--- 10.130.134.193 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms

sh-4.2$

But no luck :(

It looks like pods communicate within the same subnet (same AZ) but not with pods residing in another subnet (different AZ).

@kprabhak

Could you check if you are using protocol 4 for IPIP or protocol 94? IIRC, AWS provides options for both in their security group configuration options

@prasenforu
Copy link
Author

prasenforu commented Sep 11, 2017

When I entered IPIP as the protocol, I noticed it automatically selected IPIP (94).

[image: AWS security group protocol selection]

@kprabhak

kprabhak commented Sep 12, 2017

Can you add IP protocol 4 to the security group list as well? I think Linux (and therefore Calico) uses standard RFC 2003 IP-in-IP, which is IP protocol 4.
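
A hedged AWS CLI sketch for that (the security group ID is a placeholder; this allows IP protocol 4 between members of the same group):

aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --ip-permissions 'IpProtocol=4,UserIdGroupPairs=[{GroupId=sg-xxxxxxxx}]'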

Also, I'm not sure if you're planning on using NodePorts or other Kubernetes service types; if so, you might need to open up other ports as well, in case a service redirects from one node to an endpoint on a different node.

BTW, if you have time for a zoom/hangout session now, and still want to do some real time troubleshooting together, let me know.

@kprabhak

Sent you a zoom invite.

@prasenforu
Author

[root@ose-master ~]# ip route show
default via 10.90.1.1 dev eth0 proto static metric 100
blackhole 10.0.103.64/26 proto bird
10.0.129.64/26 via 10.90.2.211 dev tunl0 proto bird onlink
10.0.134.192/26 via 10.90.2.210 dev tunl0 proto bird onlink
10.0.157.128/26 via 10.90.1.209 dev tunl0 proto bird onlink
10.90.1.0/24 dev eth0 proto kernel scope link src 10.90.1.208 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
[root@ose-master ~]#

[root@ose-master ~]# oc get pod -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP               NODE
docker-registry-1-p4p0l    1/1       Running   2          13h       10.128.157.138   ose-hub.cloud-cafe.in
mongo                      1/1       Running   2          13h       10.130.134.195   ose-node1.cloud-cafe.in
myemp-1-rs9br              1/1       Running   2          13h       10.128.157.139   ose-hub.cloud-cafe.in
registry-console-1-x2pkt   1/1       Running   2          13h       10.129.177.195   ose-node2.cloud-cafe.in
router-1-67hsp             2/2       Running   4          13h       10.90.1.209      ose-hub.cloud-cafe.in

[root@ose-master ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.90.1.209  | node-to-node mesh | up    | 02:52:59 | Established |
| 10.90.2.210  | node-to-node mesh | up    | 02:53:08 | Established |
| 10.90.2.211  | node-to-node mesh | up    | 02:53:19 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@ose-master ~]# calicoctl get ippool -o yaml

- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 10.0.0.0/16
  spec:
    ipip:
      enabled: true
    nat-outgoing: true

@kprabhak

kprabhak commented Sep 12, 2017

@ozdanborne @tmjd As you can see from the output shared by @prasenforu , the (bird) routes on the nodes are 10.0.129.64/26, 10.0.134.192/26 and 10.0.157.128/26 and don't match the pod's IP addresses which are 10.129.177.195, 10.130.134.195, etc.

@tmjd
Member

tmjd commented Sep 12, 2017

Is it possible there are 2 network configurations set up? And Calico is not being used?
Did you check calicoctl get wep? Do those endpoints show up in Calico?

@prasenforu
Author

As discussed with Prabhakar, I have done a fresh install, adding the following variables:

calico_ipv4pool_ipip="cross-subnet"
calico_ipv4pool_cidr="192.168.0.0/16"

and my ansible host file as follows,

[OSEv3:children]
nodes
masters
etcd

[OSEv3:vars]
openshift_master_default_subdomain=cloudapps.cloud-cafe.in
ansible_ssh_user=root
deployment_type=origin
os_sdn_network_plugin_name=cni
openshift_use_calico=true
calico_ipv4pool_ipip="cross-subnet"
calico_ipv4pool_cidr="192.168.0.0/16"
openshift_use_openshift_sdn=false
openshift_disable_check=disk_availability,memory_availability,docker_storage
openshift_release=v3.6
openshift_image_tag=v3.6.0


# Comment the following to disable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/users.htpasswd'}]

[nodes]
ose-master  openshift_ip=10.90.1.208 openshift_public_ip=XXXXXXXXX openshift_hostname=ose-master.cloud-cafe.in openshift_public_hostname=XXXXXXXXX openshift_node_labels="{'region': 'infra'}" openshift_schedulable=False
ose-hub  openshift_ip=10.90.1.209 openshift_public_ip=10.90.1.209 openshift_hostname=ose-hub.cloud-cafe.in openshift_public_hostname=ose-hub.cloud-cafe.in openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True
ose-node1  openshift_ip=10.90.2.210 openshift_public_ip=10.90.2.210 openshift_hostname=ose-node1.cloud-cafe.in openshift_public_hostname=ose-node1.cloud-cafe.in openshift_schedulable=True
ose-node2  openshift_ip=10.90.2.211 openshift_public_ip=10.90.2.210 openshift_hostname=ose-node2.cloud-cafe.in openshift_public_hostname=ose-node2.cloud-cafe.in openshift_schedulable=True

[masters]
ose-master  openshift_ip=10.90.1.208 openshift_public_ip=XXXXXXXXX openshift_hostname=ose-master.cloud-cafe.in openshift_public_hostname=XXXXXXXXX

[etcd]
ose-master  openshift_ip=10.90.1.208 openshift_public_ip=XXXXXXXXX openshift_hostname=ose-master.cloud-cafe.in openshift_public_hostname=XXXXXXXXX

Below is the output after installation (this came by default; it was NOT set by me):

[root@ose-master ~]# calicoctl get ippool -o yaml
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 10.128.0.0/14
  spec:
    ipip:
      enabled: true
      mode: cross-subnet
    nat-outgoing: true

My pods are running:

[root@ose-master ~]# oc get pod -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP               NODE
docker-registry-1-tqgnn    1/1       Running   2          39m       10.128.157.136   ose-hub.cloud-cafe.in
mongo                      1/1       Running   0          4m        10.130.134.193   ose-node1.cloud-cafe.in
myemp-1-pvlqt              1/1       Running   0          4m        10.128.157.138   ose-hub.cloud-cafe.in
registry-console-1-3hwhj   1/1       Running   2          34m       10.129.177.195   ose-node2.cloud-cafe.in
router-1-qfdsg             2/2       Running   2          15m       10.90.1.209      ose-hub.cloud-cafe.in

IP route of each host

#### IP route show on ose-master

default via 10.90.1.1 dev eth0  proto static  metric 100
10.90.1.0/24 dev eth0  proto kernel  scope link  src 10.90.1.208  metric 100
10.128.157.128/26 via 10.90.1.209 dev eth0  proto bird
10.129.177.192/26 via 10.90.2.211 dev tunl0  proto bird onlink
10.130.134.192/26 via 10.90.2.210 dev tunl0  proto bird onlink
blackhole 10.131.94.0/26  proto bird
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1

#### IP route show on ose-hub

default via 10.90.1.1 dev eth0  proto static  metric 100
10.90.1.0/24 dev eth0  proto kernel  scope link  src 10.90.1.209  metric 100
blackhole 10.128.157.128/26  proto bird
10.128.157.136 dev calib3d49d77298  scope link
10.128.157.138 dev calib1a18accc83  scope link
10.129.177.192/26 via 10.90.2.211 dev tunl0  proto bird onlink
10.130.134.192/26 via 10.90.2.210 dev tunl0  proto bird onlink
10.131.94.0/26 via 10.90.1.208 dev eth0  proto bird
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1

#### IP route show on ose-node1

default via 10.90.2.1 dev eth0  proto static  metric 100
10.90.2.0/24 dev eth0  proto kernel  scope link  src 10.90.2.210  metric 100
10.128.157.128/26 via 10.90.1.209 dev tunl0  proto bird onlink
10.129.177.192/26 via 10.90.2.211 dev eth0  proto bird
blackhole 10.130.134.192/26  proto bird
10.130.134.193 dev cali7772039f34c  scope link
10.131.94.0/26 via 10.90.1.208 dev tunl0  proto bird onlink
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1

#### IP route show on ose-node2

default via 10.90.2.1 dev eth0  proto static  metric 100
10.90.2.0/24 dev eth0  proto kernel  scope link  src 10.90.2.211  metric 100
10.128.157.128/26 via 10.90.1.209 dev tunl0  proto bird onlink
blackhole 10.129.177.192/26  proto bird
10.129.177.195 dev cali9cd45fb1dac  scope link
10.130.134.192/26 via 10.90.2.210 dev eth0  proto bird
10.131.94.0/26 via 10.90.1.208 dev tunl0  proto bird onlink
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1

But no luck:

[root@ose-master ~]# oc rsh router-1-qfdsg sh
Defaulting container name to router.
sh-4.2$ ping 10.130.134.193
PING 10.130.134.193 (10.130.134.193) 56(84) bytes of data.
^C
--- 10.130.134.193 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2003ms

sh-4.2$ ping 10.128.157.136
PING 10.128.157.136 (10.128.157.136) 56(84) bytes of data.
64 bytes from 10.128.157.136: icmp_seq=1 ttl=64 time=0.042 ms
64 bytes from 10.128.157.136: icmp_seq=2 ttl=64 time=0.046 ms
^C
--- 10.128.157.136 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.042/0.044/0.046/0.002 ms
sh-4.2$ exit
exit
[root@ose-master ~]# oc rsh mongo sh
# ping 10.129.177.195
PING 10.129.177.195 (10.129.177.195): 56 data bytes
^C--- 10.129.177.195 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
# ping ose-node2.cloud-cafe.in
PING ose-node2.cloud-cafe.in (10.90.2.211): 56 data bytes
64 bytes from 10.90.2.211: icmp_seq=0 ttl=63 time=0.595 ms
64 bytes from 10.90.2.211: icmp_seq=1 ttl=63 time=0.684 ms
64 bytes from 10.90.2.211: icmp_seq=2 ttl=63 time=0.775 ms
^C--- ose-node2.cloud-cafe.in ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.595/0.685/0.775/0.073 ms
#

Above I tried pinging from the router pod (running on the HUB host) to the mongo pod (running on the NODE-1 host); result: unsuccessful.

Then I tried pinging from the router pod (running on the HUB host) to the docker-registry pod (running on the same host); result: successful.

Then I tried pinging from the mongo pod (running on the NODE-1 host) to the registry-console pod (running on the NODE-2 host); result: unsuccessful. NOTE - these two hosts (node1 & node2) are in the same subnet.

@prasenforu
Author

[root@ose-master ~]# calicoctl get wep
NODE                      ORCHESTRATOR   WORKLOAD                           NAME
ose-hub.cloud-cafe.in     k8s            default.docker-registry-1-tqgnn    eth0
ose-hub.cloud-cafe.in     k8s            default.myemp-1-pvlqt              eth0
ose-node1.cloud-cafe.in   k8s            default.mongo                      eth0
ose-node2.cloud-cafe.in   k8s            default.registry-console-1-3hwhj   eth0

[root@ose-master ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.90.1.209  | node-to-node mesh | up    | 14:49:54 | Established |
| 10.90.2.210  | node-to-node mesh | up    | 14:49:57 | Established |
| 10.90.2.211  | node-to-node mesh | up    | 14:50:06 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

@kprabhak

@prasenforu Your Calico ippool (10.128.0.0/14) still does not match the pool you configured in /etc/ansible/hosts, i.e. calico_ipv4pool_cidr="192.168.0.0/16".

When you do a fresh install, are you simply rerunning ansible-playbook? As @tmjd points out, perhaps there is cruft left behind in etcd from previous installs on the same master node, which might need to be cleaned up.

@prasenforu
Author

Everything was installed on fresh new EC2 hosts.

Not sure where it is pulling the IP pool (10.x CIDR) from.

@prasenforu
Author

Though calico_ipv4pool_cidr="192.168.0.0/16" was not reflected in the pod IPs, for whatever IPs the pods currently have (from the 10.x CIDR range), do you find any mismatch with the hosts' IP routes?

@prasenforu
Author

prasenforu commented Sep 13, 2017

After a little searching, I found where it is pulling 10.128.0.0/14 from.

It is because of a recent change to the Ansible file openshift-ansible/roles/calico/templates/calico.service.j2, which now uses the env openshift.master.sdn_cluster_network_cidr.

Because of that parameter it uses the default configuration, the 10.128.0.0/14 cluster network.

I tried to edit openshift-ansible/roles/calico/templates/calico.service.j2 back to the OLD value, the env calico_ipv4pool_cidr (based on discussion with Prabhakar).

After changing that parameter I ran into the issue below.

  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason          Message
  ---------     --------        -----   ----                            -------------   --------        ------          -------
  46s           46s             1       default-scheduler                               Normal          Scheduled       Successfully assigned router-1-deploy to ose-hub.cloud-cafe.in
  13s           13s             1       kubelet, ose-hub.cloud-cafe.in                  Warning         FailedSync      Error syncing pod
  12s           12s             1       kubelet, ose-hub.cloud-cafe.in                  Normal          SandboxChanged  Pod sandbox changed, it will be killed and re-created.

That is why calico_ipv4pool_cidr="192.168.0.0/16" was not reflected.
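
For reference, a hedged sketch of the kind of template change being described (the exact docker-run flags in calico.service.j2 are omitted; CALICO_IPV4POOL_CIDR is assumed to be the calico/node environment variable the template feeds):

# before (current template): pass the OpenShift SDN cluster CIDR
#   -e CALICO_IPV4POOL_CIDR={{ openshift.master.sdn_cluster_network_cidr }} \
# after (reverted): pass the inventory variable instead
#   -e CALICO_IPV4POOL_CIDR={{ calico_ipv4pool_cidr }} \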

@kprabhak

@prasenforu Indeed, I had suggested backing out of that commit (i.e., replacing openshift.master.sdn_cluster_network_cidr with calico_ipv4pool_cidr) prior to redeploying.

Would suggest checking 'calicoctl get ippool -o yaml', 'oc get pods -o wide', 'ip addr show' and 'ip route show' to make sure that (a combined sketch follows this list):

  • the correct (expected) IP pool is configured in Calico
  • pods are being assigned IPs from that pool, and
  • the routes being advertised between nodes via bird are for /26 blocks that include the IPs assigned to pods
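
A combined hedged sketch of those checks, run on the master (commands as used elsewhere in this thread):

calicoctl get ippool -o yaml        # pool should be the CIDR set in the inventory
oc get pods -o wide                 # pod IPs should fall inside that pool
ip addr show tunl0                  # the tunnel address should also come from the pool
ip route show | grep bird           # the /26 block routes should cover the pod IPs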

@prasenforu
Author

prasenforu commented Sep 13, 2017

Ansible hosts

[OSEv3:children]
nodes
masters
etcd

[OSEv3:vars]
openshift_master_default_subdomain=cloudapps.cloud-cafe.in
ansible_ssh_user=root
deployment_type=origin
os_sdn_network_plugin_name=cni
openshift_use_calico=true
calico_ipv4pool_ipip="cross-subnet"
calico_ipv4pool_cidr="10.5.0.0/16"
openshift_use_openshift_sdn=false
openshift_disable_check=disk_availability,memory_availability,docker_storage
openshift_release=v3.6
openshift_image_tag=v3.6.0


# Comment the following to disable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/users.htpasswd'}]

[nodes]
ose-master  openshift_ip=10.90.1.208 openshift_public_ip=XXXXXXXXX openshift_hostname=ose-master.cloud-cafe.in openshift_public_hostname=XXXXXXXXX openshift_node_labels="{'region': 'infra'}" openshift_schedulable=False
ose-hub  openshift_ip=10.90.1.209 openshift_public_ip=10.90.1.209 openshift_hostname=ose-hub.cloud-cafe.in openshift_public_hostname=ose-hub.cloud-cafe.in openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True
ose-node1  openshift_ip=10.90.2.210 openshift_public_ip=10.90.2.210 openshift_hostname=ose-node1.cloud-cafe.in openshift_public_hostname=ose-node1.cloud-cafe.in openshift_schedulable=True
ose-node2  openshift_ip=10.90.2.211 openshift_public_ip=10.90.2.210 openshift_hostname=ose-node2.cloud-cafe.in openshift_public_hostname=ose-node2.cloud-cafe.in openshift_schedulable=True

[masters]
ose-master  openshift_ip=10.90.1.208 openshift_public_ip=XXXXXXXXX openshift_hostname=ose-master.cloud-cafe.in openshift_public_hostname=XXXXXXXXX

[etcd]
ose-master  openshift_ip=10.90.1.208 openshift_public_ip=XXXXXXXXX openshift_hostname=ose-master.cloud-cafe.in openshift_public_hostname=XXXXXXXXX
Output as requested ...
#### IP addr & route show on ose-master

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:d3:a6:f7:3c:b4 brd ff:ff:ff:ff:ff:ff
    inet 10.90.1.208/24 brd 10.90.1.255 scope global dynamic eth0
       valid_lft 2361sec preferred_lft 2361sec
    inet6 fe80::d3:a6ff:fef7:3cb4/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:eb:d2:ea:ba brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.5.103.64/32 brd 10.5.103.64 scope global tunl0
       valid_lft forever preferred_lft forever

default via 10.90.1.1 dev eth0  proto static  metric 100
blackhole 10.5.103.64/26  proto bird
10.5.129.64/26 via 10.90.2.211 dev tunl0  proto bird onlink
10.5.134.192/26 via 10.90.2.210 dev tunl0  proto bird onlink
10.5.157.128/26 via 10.90.1.209 dev eth0  proto bird
10.90.1.0/24 dev eth0  proto kernel  scope link  src 10.90.1.208  metric 100
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1

#### IP addr & route show on ose-hub

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:96:8d:89:3b:e8 brd ff:ff:ff:ff:ff:ff
    inet 10.90.1.209/24 brd 10.90.1.255 scope global dynamic eth0
       valid_lft 2369sec preferred_lft 2369sec
    inet6 fe80::96:8dff:fe89:3be8/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:68:f6:d2:c1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.5.157.128/32 brd 10.5.157.128 scope global tunl0
       valid_lft forever preferred_lft forever
6: cali726b3fe6e2d@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 62:5b:c2:a9:66:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::605b:c2ff:fea9:66f6/64 scope link
       valid_lft forever preferred_lft forever

default via 10.90.1.1 dev eth0  proto static  metric 100
10.5.103.64/26 via 10.90.1.208 dev eth0  proto bird
10.5.129.64/26 via 10.90.2.211 dev tunl0  proto bird onlink
10.5.134.192/26 via 10.90.2.210 dev tunl0  proto bird onlink
blackhole 10.5.157.128/26  proto bird
10.5.157.134 dev cali726b3fe6e2d  scope link
10.90.1.0/24 dev eth0  proto kernel  scope link  src 10.90.1.209  metric 100
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1

#### IP addr & route show on ose-node1

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000
    link/ether 06:80:ec:61:ac:fc brd ff:ff:ff:ff:ff:ff
    inet 10.90.2.210/24 brd 10.90.2.255 scope global dynamic eth0
       valid_lft 2364sec preferred_lft 2364sec
    inet6 fe80::480:ecff:fe61:acfc/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:a7:be:fe:a6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.5.134.192/32 brd 10.5.134.192 scope global tunl0
       valid_lft forever preferred_lft forever
8: cali96d04af35d5@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 6e:72:5f:0a:af:39 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::6c72:5fff:fe0a:af39/64 scope link
       valid_lft forever preferred_lft forever

default via 10.90.2.1 dev eth0  proto static  metric 100
10.5.103.64/26 via 10.90.1.208 dev tunl0  proto bird onlink
10.5.129.64/26 via 10.90.2.211 dev eth0  proto bird
blackhole 10.5.134.192/26  proto bird
10.5.134.197 dev cali96d04af35d5  scope link
10.5.157.128/26 via 10.90.1.209 dev tunl0  proto bird onlink
10.90.2.0/24 dev eth0  proto kernel  scope link  src 10.90.2.210  metric 100
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1

#### IP addr & route show on ose-node2

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000
    link/ether 06:1f:4d:d6:4c:e4 brd ff:ff:ff:ff:ff:ff
    inet 10.90.2.211/24 brd 10.90.2.255 scope global dynamic eth0
       valid_lft 2361sec preferred_lft 2361sec
    inet6 fe80::41f:4dff:fed6:4ce4/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:5d:55:df:d8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.5.129.64/32 brd 10.5.129.64 scope global tunl0
       valid_lft forever preferred_lft forever
7: cali7772039f34c@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 8a:0d:0d:be:7d:b0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::880d:dff:febe:7db0/64 scope link
       valid_lft forever preferred_lft forever
8: cali89e5fbd3c54@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 3a:d9:e2:96:ee:9d brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::38d9:e2ff:fe96:ee9d/64 scope link
       valid_lft forever preferred_lft forever

default via 10.90.2.1 dev eth0  proto static  metric 100
10.5.103.64/26 via 10.90.1.208 dev tunl0  proto bird onlink
blackhole 10.5.129.64/26  proto bird
10.5.129.67 dev cali7772039f34c  scope link
10.5.129.68 dev cali89e5fbd3c54  scope link
10.5.134.192/26 via 10.90.2.210 dev eth0  proto bird
10.5.157.128/26 via 10.90.1.209 dev tunl0  proto bird onlink
10.90.2.0/24 dev eth0  proto kernel  scope link  src 10.90.2.211  metric 100
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1

[root@ose-master ~]# calicoctl get ippool -o yaml
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 10.5.0.0/16
  spec:
    ipip:
      enabled: true
      mode: cross-subnet
    nat-outgoing: true

[root@ose-master ~]# oc get pods -o wide
NAME                      READY     STATUS    RESTARTS   AGE       IP             NODE
docker-registry-1-g18sq   1/1       Running   1          29m       10.5.157.134   ose-hub.cloud-cafe.in
mongo                     1/1       Running   0          6m        10.5.129.67    ose-node2.cloud-cafe.in
myemp-1-2b8qz             1/1       Running   0          4m        10.5.134.197   ose-node1.cloud-cafe.in
myemp-1-8z2s9             1/1       Running   0          6m        10.5.129.68    ose-node2.cloud-cafe.in
router-1-81792            2/2       Running   0          20m       10.90.1.209    ose-hub.cloud-cafe.in


Below is some testing, but the result is the same as before: pods on the same host are able to connect, but not pods on another host.

[root@ose-master ~]# oc rsh myemp-1-2b8qz sh
$ ping 10.5.129.67
PING 10.5.129.67 (10.5.129.67): 56 data bytes
^C--- 10.5.129.67 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
$ ping 10.5.129.68
PING 10.5.129.68 (10.5.129.68): 56 data bytes
^C--- 10.5.129.68 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
$ exit
command terminated with exit code 1
[root@ose-master ~]# oc rsh myemp-1-8z2s9 sh
$ ping 10.5.134.197
PING 10.5.134.197 (10.5.134.197): 56 data bytes
^C--- 10.5.134.197 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
$ ping 10.5.129.67
PING 10.5.129.67 (10.5.129.67): 56 data bytes
64 bytes from 10.5.129.67: icmp_seq=0 ttl=63 time=0.106 ms
64 bytes from 10.5.129.67: icmp_seq=1 ttl=63 time=0.070 ms
^C--- 10.5.129.67 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.070/0.088/0.106/0.000 ms

@tmjd
Member

tmjd commented Sep 13, 2017

From what I see it looks like everything is correct (except that traffic is not flowing).
You should try tcpdump -i any with a filter for proto 4 or icmp on the host with the source pod and on the host with the destination pod, and then attempt a cross-host pod-to-pod ping. You should be able to see how the traffic is being sent, whether it is getting encapsulated with IPIP, and whether the packets are being received on the destination.
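
For example, something like the following on both hosts (a minimal sketch; the interface name and the pod/IP values are taken from the output earlier in this thread) should show whether the IPIP-encapsulated (protocol 4) packets leave the source host and arrive at the destination host:

# on ose-node1 (hosts myemp-1-2b8qz) and on ose-node2 (hosts mongo / 10.5.129.67)
tcpdump -n -i any 'ip proto 4 or icmp'

# in another terminal, repeat the failing cross-host ping
oc rsh myemp-1-2b8qz ping -c 3 10.5.129.67

If the outer IPIP packets appear on the sender but never on the receiver, something between the hosts is dropping them; if they arrive but no reply goes back out, the destination host's firewall is the place to look.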

@prasenforu
Author

Finally able to resolve the issue; tcpdump helped me a lot.

The issue was in iptables, but I am not sure which was the culprit, OpenShift or Calico, because both created rules after the default setup.

My default /etc/sysconfig/iptables

# Generated by iptables-save v1.4.21 on Wed Sep 13 10:13:07 2017
*nat
:PREROUTING ACCEPT [1:343]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [1:80]
:POSTROUTING ACCEPT [1:80]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Wed Sep 13 10:13:07 2017
# Generated by iptables-save v1.4.21 on Wed Sep 13 10:13:07 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:OS_FIREWALL_ALLOW - [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j OS_FIREWALL_ALLOW
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A DOCKER-ISOLATION -j RETURN
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 4789 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 179 -j ACCEPT
COMMIT
# Completed on Wed Sep 13 10:13:07 2017

After running the iptables-save command, I got the following output.

# Generated by iptables-save v1.4.21 on Thu Sep 14 07:16:21 2017
*raw
:PREROUTING ACCEPT [6299:2119791]
:OUTPUT ACCEPT [5935:505244]
:cali-OUTPUT - [0:0]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-to-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A cali-OUTPUT -m comment --comment "cali:WX1xZBEtmbS0Rhjs" -j MARK --set-xmark 0x0/0xf000000
-A cali-OUTPUT -m comment --comment "cali:iE00ZyllJNXfrlg_" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:Asois4hxp1rUxwJS" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:zatSDPVUhhPCk6Iy" -j MARK --set-xmark 0x0/0xf000000
-A cali-PREROUTING -i cali+ -m comment --comment "cali:-ES4EW0vxFmM81t8" -j MARK --set-xmark 0x4000000/0x4000000
-A cali-PREROUTING -m comment --comment "cali:VE1J3S_1t9q8GAsm" -m mark --mark 0x0/0x4000000 -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:VX8l4jKL9w89GXz5" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:73bZKoyDfOpFwC2T" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:QMFuWo6o-d9yOpNm" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Kup7QkrsdmfGX0uL" -m multiport --dports 4001 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:xYYr5PEqDf_Pqfkv" -m multiport --dports 7001 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:nbWBvu4OtudVY60Q" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:UxFu5cDK5En6dT3Y" -m multiport --dports 67 -j ACCEPT
COMMIT
# Completed on Thu Sep 14 07:16:21 2017
# Generated by iptables-save v1.4.21 on Thu Sep 14 07:16:21 2017
*mangle
:PREROUTING ACCEPT [912:68806]
:INPUT ACCEPT [6299:2119791]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5935:505244]
:POSTROUTING ACCEPT [5935:505244]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-from-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A cali-PREROUTING -m comment --comment "cali:6BJqBjBC7crtA-7-" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:nE3PUa5RSRqBBvwx" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-PREROUTING -i cali+ -m comment --comment "cali:qgFofvzQe6yJPouQ" -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:o178eO5vvpj8e65z" -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:5TQcm-i_T8rVGEEa" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
COMMIT
# Completed on Thu Sep 14 07:16:21 2017
# Generated by iptables-save v1.4.21 on Thu Sep 14 07:16:21 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [98:8192]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-NODEPORT-NON-LOCAL - [0:0]
:KUBE-SERVICES - [0:0]
:OS_FIREWALL_ALLOW - [0:0]
:cali-FORWARD - [0:0]
:cali-INPUT - [0:0]
:cali-OUTPUT - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-from-wl-dispatch - [0:0]
:cali-fw-cali96d04af35d5 - [0:0]
:cali-pri-k8s_ns.default - [0:0]
:cali-pro-k8s_ns.default - [0:0]
:cali-to-host-endpoint - [0:0]
:cali-to-wl-dispatch - [0:0]
:cali-tw-cali96d04af35d5 - [0:0]
:cali-wl-to-host - [0:0]
-A INPUT -m comment --comment "cali:Cz_u1IQiXIMmKD4c" -j cali-INPUT
-A INPUT -j KUBE-FIREWALL
-A INPUT -m comment --comment "Ensure that non-local NodePort traffic can flow" -j KUBE-NODEPORT-NON-LOCAL
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j OS_FIREWALL_ALLOW
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m comment --comment "cali:wUHhoiAYhphO9Mso" -j cali-FORWARD
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-SERVICES -d 172.30.181.78/32 -p tcp -m comment --comment "default/registry-console:registry-console has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 4789 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 179 -j ACCEPT
-A cali-FORWARD -i cali+ -m comment --comment "cali:X3vB2lGcBrfkYquC" -j cali-from-wl-dispatch
-A cali-FORWARD -o cali+ -m comment --comment "cali:UtJ9FnhBnFbyQMvU" -j cali-to-wl-dispatch
-A cali-FORWARD -i cali+ -m comment --comment "cali:Tt19HcSdA5YIGSsw" -j ACCEPT
-A cali-FORWARD -o cali+ -m comment --comment "cali:9LzfFCvnpC5_MYXm" -j ACCEPT
-A cali-FORWARD -m comment --comment "cali:7AofLLOqCM5j36rM" -j MARK --set-xmark 0x0/0xe000000
-A cali-FORWARD -m comment --comment "cali:QM1_joSl7tL76Az7" -m mark --mark 0x0/0x1000000 -j cali-from-host-endpoint
-A cali-FORWARD -m comment --comment "cali:C1QSog3bk0AykjAO" -j cali-to-host-endpoint
-A cali-FORWARD -m comment --comment "cali:DmFiPAmzcisqZcvo" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-INPUT -m comment --comment "cali:i7okJZpS8VxaJB3n" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-INPUT -p ipv4 -m comment --comment "cali:p8Wwvr6qydjU36AQ" -m comment --comment "Drop IPIP packets from non-Calico hosts" -m set ! --match-set cali4-all-hosts src -j DROP
-A cali-INPUT -i cali+ -m comment --comment "cali:QZT4Ptg57_76nGng" -g cali-wl-to-host
-A cali-INPUT -m comment --comment "cali:V0Veitpvpl5h1xwi" -j MARK --set-xmark 0x0/0xf000000
-A cali-INPUT -m comment --comment "cali:3R1g0cpvSoBlKzVr" -j cali-from-host-endpoint
-A cali-INPUT -m comment --comment "cali:efXx-pqD4s60WsDL" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-OUTPUT -m comment --comment "cali:YQSSJIsRcHjFbXaI" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-OUTPUT -o cali+ -m comment --comment "cali:KRjBsKsBcFBYKCEw" -j RETURN
-A cali-OUTPUT -m comment --comment "cali:3VKAQBcyUUW5kS_j" -j MARK --set-xmark 0x0/0xf000000
-A cali-OUTPUT -m comment --comment "cali:Z1mBCSH1XHM6qq0k" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:N0jyWt2RfBedKw3L" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:73bZKoyDfOpFwC2T" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:QMFuWo6o-d9yOpNm" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Kup7QkrsdmfGX0uL" -m multiport --dports 4001 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:xYYr5PEqDf_Pqfkv" -m multiport --dports 7001 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:nbWBvu4OtudVY60Q" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:UxFu5cDK5En6dT3Y" -m multiport --dports 67 -j ACCEPT
-A cali-from-wl-dispatch -i cali96d04af35d5 -m comment --comment "cali:O83YMGYlIRBUOeHm" -g cali-fw-cali96d04af35d5
-A cali-from-wl-dispatch -m comment --comment "cali:LAQr12c8DCeFXo3-" -m comment --comment "Unknown interface" -j DROP
-A cali-fw-cali96d04af35d5 -m comment --comment "cali:FpEWFBejPXt_2XaN" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali96d04af35d5 -m comment --comment "cali:f7GPoLa_22go8NA_" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali96d04af35d5 -m comment --comment "cali:N0bLLaRhTkj-ew2d" -j MARK --set-xmark 0x0/0x1000000
-A cali-fw-cali96d04af35d5 -m comment --comment "cali:pVsLXZItp9potXtK" -j cali-pro-k8s_ns.default
-A cali-fw-cali96d04af35d5 -m comment --comment "cali:QmwfqMmGD980o0CX" -m comment --comment "Return if profile accepted" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-fw-cali96d04af35d5 -m comment --comment "cali:KQGYPj66KeBRXMPy" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-pri-k8s_ns.default -m comment --comment "cali:6MWuUqsVPzpSgE3L" -j MARK --set-xmark 0x1000000/0x1000000
-A cali-pri-k8s_ns.default -m comment --comment "cali:UGCdoOXoPRcONGv8" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-pro-k8s_ns.default -m comment --comment "cali:DTsGE7pFaKbRuEBg" -j MARK --set-xmark 0x1000000/0x1000000
-A cali-pro-k8s_ns.default -m comment --comment "cali:4bIByWXruQ1DMcbo" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-to-wl-dispatch -o cali96d04af35d5 -m comment --comment "cali:9ajXj6S8mKEGqObh" -g cali-tw-cali96d04af35d5
-A cali-to-wl-dispatch -m comment --comment "cali:BwG5w14LEE2fvKEe" -m comment --comment "Unknown interface" -j DROP
-A cali-tw-cali96d04af35d5 -m comment --comment "cali:EDdudRkZvRKzotmw" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-cali96d04af35d5 -m comment --comment "cali:UkuOREtRsIc8qumz" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-cali96d04af35d5 -m comment --comment "cali:JU0c278OaDGLX3ZD" -j MARK --set-xmark 0x0/0x1000000
-A cali-tw-cali96d04af35d5 -m comment --comment "cali:S-GWjzahFL0pbjAu" -j cali-pri-k8s_ns.default
-A cali-tw-cali96d04af35d5 -m comment --comment "cali:LLfjY26IBWAA2zKR" -m comment --comment "Return if profile accepted" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-tw-cali96d04af35d5 -m comment --comment "cali:za--rsUILPSrDZ1y" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-wl-to-host -p udp -m comment --comment "cali:aEOMPPLgak2S0Lxs" -m multiport --sports 68 -m multiport --dports 67 -j ACCEPT
-A cali-wl-to-host -p udp -m comment --comment "cali:SzR8ejPiuXtFMS8B" -m multiport --dports 53 -j ACCEPT
-A cali-wl-to-host -m comment --comment "cali:MEmlbCdco0Fefcrw" -j cali-from-wl-dispatch
-A cali-wl-to-host -m comment --comment "cali:LZBoXHDOlr3ok4R3" -m comment --comment "Configured DefaultEndpointToHostAction" -j ACCEPT
COMMIT
# Completed on Thu Sep 14 07:16:21 2017
# Generated by iptables-save v1.4.21 on Thu Sep 14 07:16:21 2017
*nat
:PREROUTING ACCEPT [5:400]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [8:552]
:POSTROUTING ACCEPT [8:552]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORT-CONTAINER - [0:0]
:KUBE-NODEPORT-HOST - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PORTALS-CONTAINER - [0:0]
:KUBE-PORTALS-HOST - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-555DHWD5ZVJXHG4E - [0:0]
:KUBE-SEP-67Y6BQHFE2S45MTC - [0:0]
:KUBE-SEP-BP4EEMUPJFM5A3CP - [0:0]
:KUBE-SEP-CRYUI7XXOZPXJJMC - [0:0]
:KUBE-SEP-G53Z2OGPTDOGR2IR - [0:0]
:KUBE-SEP-H34C5TAE5SU7ELOL - [0:0]
:KUBE-SEP-H5WMW2UGV5PG4REC - [0:0]
:KUBE-SEP-UZPTFJHKUWAQBDL6 - [0:0]
:KUBE-SEP-YIRBSGC7ZLYO7S7K - [0:0]
:KUBE-SEP-ZJWVVFMUOYVLQ4VH - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SVC-4JCRTMMYZAAYMIJ2 - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SVC-DEGCXZMVXZMJS2KL - [0:0]
:KUBE-SVC-ECTPRXTXBM34L34Q - [0:0]
:KUBE-SVC-G2OJTDIWIJ7HQ7MY - [0:0]
:KUBE-SVC-GQKZAHCS5DTMHUQ6 - [0:0]
:KUBE-SVC-IA2GPLGVBIABB7C7 - [0:0]
:KUBE-SVC-IKV43KYNCXS2W7KZ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:cali-OUTPUT - [0:0]
:cali-POSTROUTING - [0:0]
:cali-PREROUTING - [0:0]
:cali-fip-dnat - [0:0]
:cali-fip-snat - [0:0]
:cali-nat-outgoing - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-CONTAINER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-CONTAINER
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-HOST
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-HOST
-A POSTROUTING -m comment --comment "cali:O3lYWMrLQYEMJtB5" -j cali-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-555DHWD5ZVJXHG4E -s 10.5.134.202/32 -m comment --comment "default/myemp:80-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-555DHWD5ZVJXHG4E -p tcp -m comment --comment "default/myemp:80-tcp" -m tcp -j DNAT --to-destination 10.5.134.202:8888
-A KUBE-SEP-67Y6BQHFE2S45MTC -s 10.90.1.209/32 -m comment --comment "default/router:1936-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-67Y6BQHFE2S45MTC -p tcp -m comment --comment "default/router:1936-tcp" -m tcp -j DNAT --to-destination 10.90.1.209:1936
-A KUBE-SEP-BP4EEMUPJFM5A3CP -s 10.90.1.209/32 -m comment --comment "default/router:443-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-BP4EEMUPJFM5A3CP -p tcp -m comment --comment "default/router:443-tcp" -m tcp -j DNAT --to-destination 10.90.1.209:443
-A KUBE-SEP-CRYUI7XXOZPXJJMC -s 10.90.1.208/32 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-CRYUI7XXOZPXJJMC -p tcp -m comment --comment "default/kubernetes:dns-tcp" -m recent --set --name KUBE-SEP-CRYUI7XXOZPXJJMC --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.90.1.208:8053
-A KUBE-SEP-G53Z2OGPTDOGR2IR -s 10.90.1.208/32 -m comment --comment "default/kubernetes:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-G53Z2OGPTDOGR2IR -p udp -m comment --comment "default/kubernetes:dns" -m recent --set --name KUBE-SEP-G53Z2OGPTDOGR2IR --mask 255.255.255.255 --rsource -m udp -j DNAT --to-destination 10.90.1.208:8053
-A KUBE-SEP-H34C5TAE5SU7ELOL -s 10.5.157.143/32 -m comment --comment "default/docker-registry:5000-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-H34C5TAE5SU7ELOL -p tcp -m comment --comment "default/docker-registry:5000-tcp" -m recent --set --name KUBE-SEP-H34C5TAE5SU7ELOL --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.5.157.143:5000
-A KUBE-SEP-H5WMW2UGV5PG4REC -s 10.90.1.208/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-H5WMW2UGV5PG4REC -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-H5WMW2UGV5PG4REC --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.90.1.208:8443
-A KUBE-SEP-UZPTFJHKUWAQBDL6 -s 10.90.1.209/32 -m comment --comment "default/router:80-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-UZPTFJHKUWAQBDL6 -p tcp -m comment --comment "default/router:80-tcp" -m tcp -j DNAT --to-destination 10.90.1.209:80
-A KUBE-SEP-YIRBSGC7ZLYO7S7K -s 10.5.129.77/32 -m comment --comment "default/mongo:" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIRBSGC7ZLYO7S7K -p tcp -m comment --comment "default/mongo:" -m tcp -j DNAT --to-destination 10.5.129.77:27017
-A KUBE-SEP-ZJWVVFMUOYVLQ4VH -s 10.5.129.78/32 -m comment --comment "default/myemp:80-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZJWVVFMUOYVLQ4VH -p tcp -m comment --comment "default/myemp:80-tcp" -m tcp -j DNAT --to-destination 10.5.129.78:8888
-A KUBE-SERVICES -d 172.30.0.1/32 -p tcp -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SERVICES -d 172.30.223.227/32 -p tcp -m comment --comment "default/mongo: cluster IP" -m tcp --dport 27017 -j KUBE-SVC-G2OJTDIWIJ7HQ7MY
-A KUBE-SERVICES -d 172.30.200.91/32 -p tcp -m comment --comment "default/myemp:80-tcp cluster IP" -m tcp --dport 80 -j KUBE-SVC-IA2GPLGVBIABB7C7
-A KUBE-SERVICES -d 172.30.181.78/32 -p tcp -m comment --comment "default/registry-console:registry-console cluster IP" -m tcp --dport 9000 -j KUBE-SVC-DEGCXZMVXZMJS2KL
-A KUBE-SERVICES -d 172.30.184.118/32 -p tcp -m comment --comment "default/router:1936-tcp cluster IP" -m tcp --dport 1936 -j KUBE-SVC-4JCRTMMYZAAYMIJ2
-A KUBE-SERVICES -d 172.30.99.162/32 -p tcp -m comment --comment "default/docker-registry:5000-tcp cluster IP" -m tcp --dport 5000 -j KUBE-SVC-ECTPRXTXBM34L34Q
-A KUBE-SERVICES -d 172.30.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 172.30.0.1/32 -p udp -m comment --comment "default/kubernetes:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SERVICES -d 172.30.184.118/32 -p tcp -m comment --comment "default/router:80-tcp cluster IP" -m tcp --dport 80 -j KUBE-SVC-GQKZAHCS5DTMHUQ6
-A KUBE-SERVICES -d 172.30.184.118/32 -p tcp -m comment --comment "default/router:443-tcp cluster IP" -m tcp --dport 443 -j KUBE-SVC-IKV43KYNCXS2W7KZ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-G53Z2OGPTDOGR2IR --mask 255.255.255.255 --rsource -j KUBE-SEP-G53Z2OGPTDOGR2IR
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -j KUBE-SEP-G53Z2OGPTDOGR2IR
-A KUBE-SVC-4JCRTMMYZAAYMIJ2 -m comment --comment "default/router:1936-tcp" -j KUBE-SEP-67Y6BQHFE2S45MTC
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-CRYUI7XXOZPXJJMC --mask 255.255.255.255 --rsource -j KUBE-SEP-CRYUI7XXOZPXJJMC
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-SEP-CRYUI7XXOZPXJJMC
-A KUBE-SVC-ECTPRXTXBM34L34Q -m comment --comment "default/docker-registry:5000-tcp" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-H34C5TAE5SU7ELOL --mask 255.255.255.255 --rsource -j KUBE-SEP-H34C5TAE5SU7ELOL
-A KUBE-SVC-ECTPRXTXBM34L34Q -m comment --comment "default/docker-registry:5000-tcp" -j KUBE-SEP-H34C5TAE5SU7ELOL
-A KUBE-SVC-G2OJTDIWIJ7HQ7MY -m comment --comment "default/mongo:" -j KUBE-SEP-YIRBSGC7ZLYO7S7K
-A KUBE-SVC-GQKZAHCS5DTMHUQ6 -m comment --comment "default/router:80-tcp" -j KUBE-SEP-UZPTFJHKUWAQBDL6
-A KUBE-SVC-IA2GPLGVBIABB7C7 -m comment --comment "default/myemp:80-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ZJWVVFMUOYVLQ4VH
-A KUBE-SVC-IA2GPLGVBIABB7C7 -m comment --comment "default/myemp:80-tcp" -j KUBE-SEP-555DHWD5ZVJXHG4E
-A KUBE-SVC-IKV43KYNCXS2W7KZ -m comment --comment "default/router:443-tcp" -j KUBE-SEP-BP4EEMUPJFM5A3CP
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-H5WMW2UGV5PG4REC --mask 255.255.255.255 --rsource -j KUBE-SEP-H5WMW2UGV5PG4REC
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-H5WMW2UGV5PG4REC
-A cali-OUTPUT -m comment --comment "cali:GBTAv2p5CwevEyJm" -j cali-fip-dnat
-A cali-POSTROUTING -m comment --comment "cali:Z-c7XtVd2Bq7s_hA" -j cali-fip-snat
-A cali-POSTROUTING -m comment --comment "cali:nYKhEzDlr11Jccal" -j cali-nat-outgoing
-A cali-POSTROUTING -o tunl0 -m comment --comment "cali:JHlpT-eSqR1TvyYm" -m addrtype ! --src-type LOCAL --limit-iface-out -m addrtype --src-type LOCAL -j MASQUERADE
-A cali-PREROUTING -m comment --comment "cali:r6XmIziWUJsdOK6Z" -j cali-fip-dnat
-A cali-nat-outgoing -m comment --comment "cali:Wd76s91357Uv7N3v" -m set --match-set cali4-masq-ipam-pools src -m set ! --match-set cali4-all-ipam-pools dst -j MASQUERADE
COMMIT
# Completed on Thu Sep 14 07:16:21 2017

Finally, here is what I did on node1 & node2:

####### Set the default iptables policies to ACCEPT

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

then

####### Flush the NAT, mangle, and filter tables and delete non-default chains

iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X

The issue got resolved, but I noticed that pods were able to communicate across subnets but not within the same subnet.

Then I edited the ippool yaml file to remove the "mode: cross-subnet" line.
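
For reference, the change amounts to something like the following (a sketch using the calicoctl v1 syntax shown above; with the mode line removed, IPIP encapsulation is used for all pod-to-pod traffic instead of only across subnets):

calicoctl get ippool -o yaml > ippool.yaml
# edit ippool.yaml so the spec reads:
#   spec:
#     ipip:
#       enabled: true
#     nat-outgoing: true
calicoctl apply -f ippool.yaml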

Though I am not an expert in iptables, please help me find the culprit rule in the iptables-save output.
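
One way to narrow it down (a sketch, not a definitive diagnosis) would be to restore the saved rules on one node, repeat the failing cross-host ping, and watch the per-rule packet counters to see which REJECT or DROP rule's counters climb:

watch -n 1 'iptables -vnL INPUT; iptables -vnL FORWARD'

A plausible suspect is the catch-all "-A INPUT -j REJECT --reject-with icmp-host-prohibited" rule, since nothing in OS_FIREWALL_ALLOW explicitly accepts IPIP (protocol 4) traffic between the nodes. If the counters confirm that, a targeted accept rule for protocol 4 from the node networks (hypothetical, using the 10.90.0.0/16 range that covers both node subnets here) might be a gentler fix than flushing everything:

iptables -I OS_FIREWALL_ALLOW -p 4 -s 10.90.0.0/16 -j ACCEPT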

Thanks all for helping me do a proper analysis, and for your valuable time.

Not yet finished; testing Calico policy is still pending :)


@prasenforu
Author

Closing this issue & opening another issue in k8-policy. Not sure if the new policy issue has any relation to the solution taken to resolve this one.

I tested the same type of scenario in Kubernetes with Calico and it works, but I am facing challenges in OpenShift.

In that case the difference between Kubernetes and OpenShift is:

OpenShift uses a Router, while Kubernetes uses an Ingress controller.
