
k3s on RHEL 8: network/DNS problems and metrics not working #5013

Closed
chris93111 opened this issue Jan 24, 2022 · 33 comments
@chris93111

chris93111 commented Jan 24, 2022

Hello, I am trying to get k3s working on Red Hat 8.4, but I am running into network/DNS problems. I checked the kernel modules (modprobe) as well as sysctl, but nothing helps. Maybe it is a flannel problem?

firewalld and SELinux are disabled
nm-cloud-setup.service and nm-cloud-setup.timer are not present
k3s was installed with the script from https://get.k3s.io

The same setup works fine on RHEL 7.9.

Environmental Info:
K3s Version:
k3s version v1.22.5+k3s1 (405bf79)
go version go1.16.10

Node(s) CPU architecture, OS, and Version:
Linux vldsocfg01 4.18.0-305.25.1.el8_4.x86_64 #1 SMP Mon Oct 18 14:34:11 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration:
2 masters
3 workers
2 front-only nodes (traefik / metallb / haproxy)

Describe the bug:

Pods crash with DNS resolution problems.
coredns:

  [ERROR] plugin/errors: 2 7635134873774865456.7522827499224113179. HINFO: read udp 10.200.3.11:45684->XXXXXXX:53: i/o timeout

longhorn:

  time="2022-01-24T19:50:55Z" level=info msg="CSI Driver: driver.longhorn.io version: v1.2.2, manager URL http://longhorn-backend:9500/v1"
2022/01/24 19:50:03 [emerg] 1#1: host not found in upstream "longhorn-backend" in /etc/nginx/nginx.conf:32

metrics:

E0124 20:17:27.096421       1 scraper.go:139] "Failed to scrape node" err="Get \"https://vldsocfg03:10250/stats/summary?only_cpu_and_memory=true\": dial tcp: i/o timeout" node="vldsocfg03"
E0124 20:17:27.100536       1 scraper.go:139] "Failed to scrape node" err="Get \"https://vldsocfg01:10250/stats/summary?only_cpu_and_memory=true\": dial tcp: i/o timeout" node="vldsocfg01"
E0124 20:18:27.049233       1 scraper.go:139] "Failed to scrape node" err="Get \"https://vldsocfg01:10250/stats/summary?only_cpu_and_memory=true\": dial tcp: i/o timeout" node="vldsocfg01"
E0124 20:18:27.056477       1 scraper.go:139] "Failed to scrape node" err="Get \"https://vldsocfg02:10250/stats/summary?only_cpu_and_memory=true\": dial tcp: i/o timeout" node="vldsocfg02"
E0124 20:18:27.068495       1 scraper.go:139] "Failed to scrape node" err="Get \"https://vldsocfg03:10250/stats/summary?only_cpu_and_memory=true\": dial tcp: i/o timeout" node="vldsocfg03"
E0124 20:18:27.076854       1 scraper.go:139] "Failed to scrape node" err="Get \"https://vldsocfg01:10250/stats/summary?only_cpu_and_memory=true\": dial tcp: i/o timeout" node="vldsocfg01"
E0124 20:18:27.084260       1 scraper.go:139] "Failed to scrape node" err="Get \"https://vldsocfg01:10250/stats/summary?only_cpu_and_memory=true\": dial tcp: i/o timeout" node="vldsocfg01"
E0124 20:18:27.090960       1 scraper.go:139] "Failed to scrape node" err="Get \"https://vldsocfg02:10250/stats/summary?only_cpu_and_memory=true\": dial tcp: i/o timeout" node="vldsocfg02"
E0124 20:18:27.104001       1 scraper.go:139] "Failed to scrape node" err="Get \"https://vldsocfg02:10250/stats/summary?only_cpu_and_memory=true\": dial tcp: i/o timeout" node="vldsocfg02"

In the k3s logs I see metrics errors:

Jan 24 20:53:03 vldsocfg01 k3s[36279]: E0124 20:53:03.842079   36279 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
Jan 24 20:53:06 vldsocfg01 k3s[36279]: E0124 20:53:06.068125   36279 cri_stats_provider.go:372] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
Jan 24 20:53:06 vldsocfg01 k3s[36279]: E0124 20:53:06.068150   36279 kubelet.go:1343] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 20:53:06 vldsocfg01 k3s[36279]: E0124 20:53:06.097788   36279 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 20:51:45 vldsocfg01 k3s[33811]: E0124 20:51:45.975122   33811 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.201.36.96:443/apis/metrics.k8s.io/v1beta1: Get "https://10.201.36.96:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 24 20:51:46 vldsocfg01 k3s[33811]: E0124 20:51:46.976471   33811 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jan 24 20:51:50 vldsocfg01 k3s[33811]: E0124 20:51:50.983597   33811 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.201.36.96:443/apis/metrics.k8s.io/v1beta1: Get "https://10.201.36.96:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.201.36.96:443: i/o timeout
Jan 24 20:51:51 vldsocfg01 k3s[33811]: E0124 20:51:51.984292   33811 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable

lsmod
Module Size Used by
xt_state 16384 0
veth 28672 0
nf_conntrack_netlink 49152 0
xt_recent 20480 6
xt_statistic 16384 21
xt_nat 16384 44
ip6t_MASQUERADE 16384 1
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nft_chain_nat 16384 8
ipt_MASQUERADE 16384 5
vxlan 65536 0
ip6_udp_tunnel 16384 1 vxlan
udp_tunnel 20480 1 vxlan
nfnetlink_log 20480 1
nft_limit 16384 1
ipt_REJECT 16384 5
nf_reject_ipv4 16384 1 ipt_REJECT
xt_limit 16384 0
xt_NFLOG 16384 1
xt_physdev 16384 2
xt_conntrack 16384 21
xt_mark 16384 25
xt_multiport 16384 4
xt_addrtype 16384 7
nft_counter 16384 329
xt_comment 16384 296
nft_compat 20480 550
nf_tables 172032 884 nft_compat,nft_counter,nft_chain_nat,nft_limit
ip_set 49152 0
nfnetlink 16384 5 nft_compat,nf_conntrack_netlink,nf_tables,ip_set,nfnetlink_log
iptable_nat 16384 0
nf_nat 45056 5 ip6t_MASQUERADE,ipt_MASQUERADE,xt_nat,nft_chain_nat,iptable_nat
nf_conntrack 172032 8 xt_conntrack,nf_nat,ip6t_MASQUERADE,xt_state,ipt_MASQUERADE,xt_nat,nf_conntrack_netlink,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
cfg80211 835584 0
rfkill 28672 2 cfg80211
vsock_loopback 16384 0
vmw_vsock_virtio_transport_common 32768 1 vsock_loopback
vmw_vsock_vmci_transport 32768 1
vsock 45056 5 vmw_vsock_virtio_transport_common,vsock_loopback,vmw_vsock_vmci_transport
sunrpc 540672 1
intel_rapl_msr 16384 0
intel_rapl_common 24576 1 intel_rapl_msr
isst_if_mbox_msr 16384 0
isst_if_common 16384 1 isst_if_mbox_msr
nfit 65536 0
libnvdimm 192512 1 nfit
crct10dif_pclmul 16384 1
crc32_pclmul 16384 0
ghash_clmulni_intel 16384 0
rapl 20480 0
vmw_balloon 24576 0
joydev 24576 0
pcspkr 16384 0
vmw_vmci 86016 2 vmw_balloon,vmw_vsock_vmci_transport
i2c_piix4 24576 0
br_netfilter 24576 0
bridge 192512 1 br_netfilter
stp 16384 1 bridge
llc 16384 2 bridge,stp
overlay 135168 4
ip_tables 28672 1 iptable_nat
xfs 1515520 7
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
sr_mod 28672 0
cdrom 65536 1 sr_mod
sd_mod 53248 4
t10_pi 16384 1 sd_mod
sg 40960 0
ata_generic 16384 0
vmwgfx 368640 1
crc32c_intel 24576 1
drm_kms_helper 233472 1 vmwgfx
syscopyarea 16384 1 drm_kms_helper
sysfillrect 16384 1 drm_kms_helper
sysimgblt 16384 1 drm_kms_helper
fb_sys_fops 16384 1 drm_kms_helper
ata_piix 36864 0
ttm 114688 1 vmwgfx
serio_raw 16384 0
libata 270336 2 ata_piix,ata_generic
drm 569344 4 vmwgfx,drm_kms_helper,ttm
vmxnet3 65536 0
vmw_pvscsi 28672 8
dm_mod 151552 21
fuse 151552 1

iptables 1.8.4

In sysctl (persisting these is sketched below):
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
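
For reference, a minimal sketch of persisting these settings across reboots (the file names are conventional, not mandated by k3s):

  # /etc/modules-load.d/k3s.conf -- load br_netfilter at boot
  br_netfilter

  # /etc/sysctl.d/90-k3s.conf
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  net.ipv4.ip_forward = 1

  # apply without rebooting
  sudo sysctl --system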

Steps To Reproduce:

  • Installed K3s
  • RHEL 8.4
  • multiple workers and masters

I tried removing the RHEL iptables package so that the iptables bundled with k3s is used, but the result is the same.

UPDATE:

With the parameter --flannel-backend=host-gw it works, but is that a good fix? (A sketch of this configuration follows the log below.)
Ingress does not work with host-gw because the front nodes are not on the same network as the workers:

Jan 25 14:07:43 vldsocfg02-front k3s[103276]: I0125 14:07:43.655113  103276 route_network.go:54] Watching for new subnet leases
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: I0125 14:07:43.655271  103276 route_network.go:93] Subnet added: 10.42.4.0/24 via x.y.6.8
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: I0125 14:07:43.655414  103276 route_network.go:93] Subnet added: 10.42.0.0/24 via x.y.6.3
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: E0125 14:07:43.655508  103276 route_network.go:168] Error adding route to {Ifindex: 2 Dst: 10.42.0.0/24 Src: <nil> Gw: x.y.6.3 Flags: [] Table: 0}
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: I0125 14:07:43.655532  103276 route_network.go:93] Subnet added: 10.42.1.0/24 via x.y.6.15
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: E0125 14:07:43.655599  103276 route_network.go:168] Error adding route to {Ifindex: 2 Dst: 10.42.1.0/24 Src: <nil> Gw: x.y.6.15 Flags: [] Table: 0}
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: I0125 14:07:43.655607  103276 route_network.go:93] Subnet added: 10.42.2.0/24 via x.y.6.13
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: E0125 14:07:43.655662  103276 route_network.go:168] Error adding route to {Ifindex: 2 Dst: 10.42.2.0/24 Src: <nil> Gw: x.y.6.13 Flags: [] Table: 0}
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: I0125 14:07:43.655673  103276 route_network.go:93] Subnet added: 10.42.3.0/24 via x.y.6.8
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: E0125 14:07:43.655730  103276 route_network.go:168] Error adding route to {Ifindex: 2 Dst: 10.42.3.0/24 Src: <nil> Gw: x.y.6.8 Flags: [] Table: 0}
Jan 25 14:07:43 vldsocfg02-front k3s[103276]: I0125 14:07:43.661130  103276 iptables.go:216] Some iptables rules are missing; deleting and recreating rules
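
For reference, the host-gw backend was set roughly like this (a sketch: /etc/rancher/k3s/config.yaml is the default k3s config path, and host-gw assumes all nodes share an L2 segment, which is exactly what the front nodes here do not):

  # /etc/rancher/k3s/config.yaml on the server
  flannel-backend: host-gw

  # or at install time
  curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=host-gw" sh -
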
@manuelbuil
Contributor

Thanks for reporting this. I'm trying with RHEL 8.4 and I see things working. Please make sure that port 8472 is not being blocked.

Could you please verify the following:

  • There is a flannel.1 interface
  • There is a cni0 interface
  • The command nmap -p 8472 $remote_node_ip returns the following when each of the other nodes' IPs is used as $remote_node_ip:
PORT     STATE  SERVICE
8472/tcp filtered otv
  • Verify that pods on the same node can communicate
  • Verify that pods on different nodes can communicate (concrete commands are sketched below)
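
These checks can be scripted; a minimal sketch (pod names and IPs are placeholders):

  # on each node: confirm the flannel VXLAN and CNI bridge interfaces exist
  ip addr show flannel.1
  ip addr show cni0

  # probe the VXLAN port on a peer node; flannel's VXLAN traffic is 8472/udp,
  # so a UDP probe is worth running in addition to the TCP one above
  nmap -sU -p 8472 $remote_node_ip

  # pod-to-pod connectivity: list pod IPs, then ping one pod from another,
  # first with both pods on the same node, then on different nodes
  kubectl get pods -A -o wide
  kubectl exec -it <pod-a> -- ping -c 3 <pod-b-ip>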

@chris93111
Author

chris93111 commented Jan 26, 2022

Hi @manuelbuil, the interfaces are present, all ports are open, and firewalld is disabled.
After downgrading the kernel to 4.18.0-193.13.2.el8_2.x86_64 (RHEL 8.2), everything works.
I see that the 8.3/8.4 kernels merged the nf_conntrack_ipv4 module into nf_conntrack.
Can you please tell me which kernel modules are loaded on your side? (A quick check is sketched below.)

Another possible lead: rancher/windows#96
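
(For comparison, a quick way to inspect the relevant modules on a given kernel; plain lsmod/modinfo usage, nothing k3s-specific. On 8.3+ kernels nf_conntrack_ipv4 no longer exists as a separate module; it is folded into nf_conntrack.)

  lsmod | grep -E 'nf_conntrack|vxlan|br_netfilter|overlay'
  modinfo nf_conntrack | head -n 5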

@manuelbuil
Contributor

Are you using vSphere?

@manuelbuil
Contributor

Can you check whether pods on the same node can communicate?

@manuelbuil
Contributor

Here is the output of lsmod on my machine (I haven't modprobed anything):
https://gist.github.com/manuelbuil/cbd1672ebbdd0d803cc14e0d27b18220

@chris93111
Author

@manuelbuil thanks for your reply. OK, so it is not a module problem.

Yes, I use vSphere.

I can ping from pods to other pods, both on the same node and on other nodes.
But DNS does not work:

kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached

command terminated with exit code 1
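
(For anyone reproducing this: the dnsutils pod used above can be created roughly as in the upstream Kubernetes DNS-debugging docs; the image reference below is an assumption, substitute whatever registry/tag you have access to.)

  kubectl run dnsutils \
    --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
    --restart=Never --command -- sleep infinity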

The coredns pod logs show nothing unusual:
.:53
[INFO] plugin/reload: Running configuration MD5 = 442b35f70385f5c97f2491a0ce8a27f6
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5

tcpdump on flannel.1 with the 8.2 kernel:

listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
14:28:56.658675 IP 10.42.6.80.58713 > 10.42.2.99.domain: 56835+ AAAA? longhorn-backend.svc.cluster.local. (52)
14:28:56.658706 IP 10.42.6.80.51830 > 10.42.2.99.domain: 33333+ A? longhorn-backend.svc.cluster.local. (52)
14:28:56.659357 IP 10.42.2.99.domain > 10.42.6.80.51830: 33333 NXDomain*- 0/1/0 (145)
14:28:56.659415 IP 10.42.2.99.domain > 10.42.6.80.58713: 56835 NXDomain*- 0/1/0 (145)
14:28:56.941565 IP 10.42.0.0.oma-ilp > 10.42.2.114.https: Flags [S], seq 44299721, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:28:56.941604 IP 10.42.0.0.33577 > 10.42.2.114.https: Flags [S], seq 4191755746, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:28:56.941653 IP 10.42.2.114.https > 10.42.0.0.oma-ilp: Flags [S.], seq 2358144253, ack 44299722, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:56.941678 IP 10.42.2.114.https > 10.42.0.0.33577: Flags [S.], seq 1698444814, ack 4191755747, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:56.941688 IP 10.42.0.0.6185 > 10.42.2.114.https: Flags [S], seq 3834005981, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:28:56.941694 IP 10.42.0.0.45707 > 10.42.2.114.https: Flags [S], seq 4078422796, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:28:56.941716 IP 10.42.2.114.https > 10.42.0.0.6185: Flags [S.], seq 1797845855, ack 3834005982, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:56.941728 IP 10.42.2.114.https > 10.42.0.0.45707: Flags [S.], seq 1090209743, ack 4078422797, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:56.941742 IP 10.42.0.0.13442 > 10.42.2.114.https: Flags [S], seq 3216199868, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:28:56.941765 IP 10.42.2.114.https > 10.42.0.0.13442: Flags [S.], seq 688542968, ack 3216199869, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:57.011373 IP 10.42.2.114.https > 10.42.0.0.adapt-sna: Flags [S.], seq 2542378797, ack 1509449448, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:57.011419 IP 10.42.2.114.https > 10.42.0.0.20859: Flags [S.], seq 1767829697, ack 2420705875, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:57.011431 IP 10.42.2.114.https > 10.42.0.0.47457: Flags [S.], seq 2958784889, ack 108516487, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:57.011449 IP 10.42.2.114.https > 10.42.0.0.51957: Flags [S.], seq 2380449021, ack 1115606338, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:57.011465 IP 10.42.2.114.https > 10.42.0.0.48804: Flags [S.], seq 981928638, ack 234780842, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:28:57.146470 IP 10.42.4.5.56729 > 10.42.2.99.domain: 4995+ A? longhorn-backend.svc.cluster.local. (52)
14:28:57.146560 IP 10.42.4.5.39693 > 10.42.2.99.domain: 52849+ AAAA? longhorn-backend.svc.cluster.local. (52)
14:28:57.146693 IP 10.42.2.99.domain > 10.42.4.5.39693: 52849 NXDomain*- 0/1/0 (145)
14:28:57.146752 IP 10.42.2.99.domain > 10.42.4.5.56729: 4995 NXDomain*- 0/1/0 (145)
14:28:57.163994 IP 10.42.6.88.39217 > 10.42.2.99.domain: 57011+ AAAA? redis.cfgapi.svc.cluster.local.cfgapi.svc.cluster.local. (73)
14:28:57.164373 IP 10.42.2.99.domain > 10.42.6.88.39217: 57011 NXDomain*- 0/1/0 (166)
14:28:57.452817 IP 10.42.2.119.34904 > 10.42.6.65.ismserver: Flags [S], seq 436848390, win 28200, options [mss 1410,sackOK,TS val 55080970 ecr 0,nop,wscale 7], length 0
14:28:57.736314 IP 10.42.4.7.53267 > 10.42.2.99.domain: 59442+ A? vault.svc.cluster.local. (57)
14:28:57.736342 IP 10.42.4.7.50509 > 10.42.2.99.domain: 39355+ AAAA? vault.svc.cluster.local. (57)
14:28:57.736644 IP 10.42.2.99.domain > 10.42.4.7.50509: 39355 NXDomain*- 0/1/0 (150)
14:28:57.736717 IP 10.42.2.99.domain > 10.42.4.7.53267: 59442 NXDomain*- 0/1/0 (150)

And with the 8.3 kernel:

tcpdump -i flannel.1
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
14:36:58.227933 IP 10.42.5.6.60234 > 10.42.2.107.cslistener: Flags [S], seq 2086612380, win 28200, options [mss 1410,sackOK,TS val 1306661760 ecr 0,nop,wscale 7], length 0
14:36:58.227993 IP 10.42.2.107.cslistener > 10.42.5.6.60234: Flags [S.], seq 3228368918, ack 2086612381, win 27960, options [mss 1410,sackOK,TS val 331782353 ecr 1306646179,nop,wscale 7], length 0
14:36:58.296456 IP 10.42.0.0.52986 > 10.42.2.114.https: Flags [S], seq 1045708562, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.296503 IP 10.42.0.0.33160 > 10.42.2.114.https: Flags [S], seq 2383604234, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.296511 IP 10.42.0.0.19961 > 10.42.2.114.https: Flags [S], seq 1760535520, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.296520 IP 10.42.0.0.46456 > 10.42.2.114.https: Flags [S], seq 1003037809, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.296572 IP 10.42.2.114.https > 10.42.0.0.52986: Flags [S.], seq 3241940205, ack 1045708563, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.296612 IP 10.42.2.114.https > 10.42.0.0.33160: Flags [S.], seq 2172176389, ack 2383604235, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.296626 IP 10.42.2.114.https > 10.42.0.0.19961: Flags [S.], seq 3383813682, ack 1760535521, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.296674 IP 10.42.2.114.https > 10.42.0.0.46456: Flags [S.], seq 1751475742, ack 1003037810, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.296685 IP 10.42.0.0.dict-lookup > 10.42.2.114.https: Flags [S], seq 256576011, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.296704 IP 10.42.2.114.https > 10.42.0.0.dict-lookup: Flags [S.], seq 1596735507, ack 256576012, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.317662 IP 10.42.0.0.59564 > 10.42.2.112.websm: Flags [S], seq 2086736161, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.317715 IP 10.42.2.112.websm > 10.42.0.0.59564: Flags [S.], seq 2068366255, ack 2086736162, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.355364 IP 10.42.2.112.websm > 10.42.0.0.59566: Flags [S.], seq 657469663, ack 1576178508, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.355440 IP 10.42.2.112.websm > 10.42.0.0.59568: Flags [S.], seq 1144824371, ack 688029727, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.355454 IP 10.42.2.112.websm > 10.42.0.0.59570: Flags [S.], seq 2481659907, ack 1307510462, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.355464 IP 10.42.2.112.websm > 10.42.0.0.59572: Flags [S.], seq 1212009510, ack 3187996124, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.381638 IP 10.42.0.0.59570 > 10.42.2.112.websm: Flags [S], seq 1307510461, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.381665 IP 10.42.0.0.59566 > 10.42.2.112.websm: Flags [S], seq 1576178507, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.381669 IP 10.42.0.0.59568 > 10.42.2.112.websm: Flags [S], seq 688029726, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.381671 IP 10.42.0.0.59572 > 10.42.2.112.websm: Flags [S], seq 3187996123, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.381700 IP 10.42.2.112.websm > 10.42.0.0.59570: Flags [S.], seq 2481659907, ack 1307510462, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.381726 IP 10.42.2.112.websm > 10.42.0.0.59566: Flags [S.], seq 657469663, ack 1576178508, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.381751 IP 10.42.2.112.websm > 10.42.0.0.59568: Flags [S.], seq 1144824371, ack 688029727, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.381770 IP 10.42.2.112.websm > 10.42.0.0.59572: Flags [S.], seq 1212009510, ack 3187996124, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:58.548429 IP 10.42.2.112.38818 > 10.42.6.64.copycat: Flags [S], seq 446793383, win 28200, options [mss 1410,sackOK,TS val 3406492726 ecr 0,nop,wscale 7], length 0
14:36:58.803396 IP 10.42.2.121.42392 > 10.42.6.83.10030: Flags [S], seq 4092704332, win 28200, options [mss 1410,sackOK,TS val 3755655581 ecr 0,nop,wscale 7], length 0
14:36:59.315392 IP 10.42.2.114.https > 10.42.0.0.dict-lookup: Flags [S.], seq 1596735507, ack 256576012, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:59.315447 IP 10.42.2.114.https > 10.42.0.0.46456: Flags [S.], seq 1751475742, ack 1003037810, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:59.315459 IP 10.42.2.114.https > 10.42.0.0.19961: Flags [S.], seq 3383813682, ack 1760535521, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:59.315477 IP 10.42.2.114.https > 10.42.0.0.33160: Flags [S.], seq 2172176389, ack 2383604235, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:59.315491 IP 10.42.2.114.https > 10.42.0.0.52986: Flags [S.], seq 3241940205, ack 1045708563, win 28200, options [mss 1410,nop,nop,sackOK,nop,wscale 7], length 0
14:36:59.341666 IP 10.42.0.0.dict-lookup > 10.42.2.114.https: Flags [S], seq 256576011, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:36:59.341698 IP 10.42.0.0.19961 > 10.42.2.114.https: Flags [S], seq 1760535520, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:36:59.341701 IP 10.42.0.0.46456 > 10.42.2.114.https: Flags [S], seq 1003037809, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:36:59.341704 IP 10.42.0.0.33160 > 10.42.2.114.https: Flags [S], seq 2383604234, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
14:36:59.341707 IP 10.42.0.0.52986 > 10.42.2.114.https: Flags [S], seq 1045708562, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0

@manuelbuil
Contributor

Could you please run these two commands:

1 - dig @10.43.0.10 kubernetes.default.svc.cluster.local
2 - sudo iptables-save | grep 53

on all nodes and show me the output? Thanks. (A loop to collect this is sketched below.)
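
A minimal loop for gathering this from every node, assuming SSH access (the hostnames are placeholders for your inventory):

  for node in vldsocfg01 vldsocfg02 vldsocfg03; do
    echo "=== $node ==="
    ssh "$node" 'dig @10.43.0.10 kubernetes.default.svc.cluster.local; sudo iptables-save | grep 53'
  done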

@chris93111
Author

chris93111 commented Jan 28, 2022

@manuelbuil

Pods:

NODE                    NAME

vldsocfg02-node    coredns-85cb69466-9l7lp
vldsocfg02-node    dashboard-kubernetes-dashboard-67f5d799bb-jc68p
vldsocfg02-node    helm-install-dashboard--1-tksvp
vldsocfg02-node    helm-install-kube-prometheus-stack--1-8jj4n
vldsocfg01-node    helm-install-traefik--1-s46gx
vldsocfg02-node    helm-install-traefik-crd--1-b4t49
vldsocfg02-node    local-path-provisioner-64ffb68fd-6clt6
vldsocfg02-node    metrics-server-9cf544f65-jjf49
vldsocfg02-front   traefik-7d484c79d5-mxd78

master 1

dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

iptables-save | grep 53
:KUBE-SEP-WGVVCPO7OWFZV533 - [0:0]
:KUBE-SEP-WGMR3X3ESA53N4LK - [0:0]
:KUBE-SEP-XFZ2OK753ABTIOFY - [0:0]
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-SVC-U5YXGQUDWU5PP6YA
-A KUBE-SVC-U5YXGQUDWU5PP6YA ! -s 10.42.0.0/16 -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -s 10.42.3.16/32 -m comment --comment "haproxy-controlplane/haproxy:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -p tcp -m comment --comment "haproxy-controlplane/haproxy:http" -m tcp -j DNAT --to-destination 10.42.3.16:80
-A KUBE-SVC-YO4TYIGBCWYI3VFG -m comment --comment "haproxy-controlplane/haproxy:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WGVVCPO7OWFZV533
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -s 10.42.4.10/32 -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -p tcp -m comment --comment "knative-serving/controller:http-profiling" -m tcp -j DNAT --to-destination 10.42.4.10:8008
-A KUBE-SVC-EI3IQDCVBHMKV2JY -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-SEP-WGMR3X3ESA53N4LK
-A KUBE-SEP-HTSZP7CY7BEFKHVJ -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-JLLHRZHLOS4X2HGL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-KIMRYYUXLHYUR34P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.2.198:9153
-A KUBE-SEP-XFZ2OK753ABTIOFY -s 10.42.2.184/32 -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-XFZ2OK753ABTIOFY -p tcp -m comment --comment "longhorn-system/longhorn-frontend:http" -m tcp -j DNAT --to-destination 10.42.2.184:8000
-A KUBE-SVC-ELE3EOWUPGZLKWXO -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-SEP-XFZ2OK753ABTIOFY
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -s 10.42.4.253/32 -m comment --comment "olm/packageserver-service:5443" -j KUBE-MARK-MASQ
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -p tcp -m comment --comment "olm/packageserver-service:5443" -m tcp -j DNAT --to-destination 10.42.4.253:5443
-A KUBE-SERVICES -d 10.43.253.254/32 -p tcp -m comment --comment "minio/minio:http has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

master 2

dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

iptables-save | grep 53
:KUBE-SEP-WGVVCPO7OWFZV533 - [0:0]
:KUBE-SEP-WGMR3X3ESA53N4LK - [0:0]
:KUBE-SEP-XFZ2OK753ABTIOFY - [0:0]
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-SVC-U5YXGQUDWU5PP6YA
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SVC-YO4TYIGBCWYI3VFG -m comment --comment "haproxy-controlplane/haproxy:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WGVVCPO7OWFZV533
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-U5YXGQUDWU5PP6YA ! -s 10.42.0.0/16 -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -s 10.42.3.16/32 -m comment --comment "haproxy-controlplane/haproxy:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -p tcp -m comment --comment "haproxy-controlplane/haproxy:http" -m tcp -j DNAT --to-destination 10.42.3.16:80
-A KUBE-SEP-WGMR3X3ESA53N4LK -s 10.42.4.10/32 -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -p tcp -m comment --comment "knative-serving/controller:http-profiling" -m tcp -j DNAT --to-destination 10.42.4.10:8008
-A KUBE-SVC-EI3IQDCVBHMKV2JY -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-SEP-WGMR3X3ESA53N4LK
-A KUBE-SEP-KIMRYYUXLHYUR34P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.2.198:9153
-A KUBE-SEP-JLLHRZHLOS4X2HGL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-HTSZP7CY7BEFKHVJ -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-XFZ2OK753ABTIOFY -s 10.42.2.184/32 -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-XFZ2OK753ABTIOFY -p tcp -m comment --comment "longhorn-system/longhorn-frontend:http" -m tcp -j DNAT --to-destination 10.42.2.184:8000
-A KUBE-SVC-ELE3EOWUPGZLKWXO -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-SEP-XFZ2OK753ABTIOFY
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -s 10.42.4.253/32 -m comment --comment "olm/packageserver-service:5443" -j KUBE-MARK-MASQ
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -p tcp -m comment --comment "olm/packageserver-service:5443" -m tcp -j DNAT --to-destination 10.42.4.253:5443
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
-A KUBE-SERVICES -d 10.43.253.254/32 -p tcp -m comment --comment "minio/minio:http has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable

worker 1

dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

iptables-save | grep 53
:KUBE-SEP-WGVVCPO7OWFZV533 - [0:0]
:KUBE-SEP-WGMR3X3ESA53N4LK - [0:0]
:KUBE-SEP-XFZ2OK753ABTIOFY - [0:0]
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-SVC-U5YXGQUDWU5PP6YA
-A KUBE-SEP-WGVVCPO7OWFZV533 -s 10.42.3.16/32 -m comment --comment "haproxy-controlplane/haproxy:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -p tcp -m comment --comment "haproxy-controlplane/haproxy:http" -m tcp -j DNAT --to-destination 10.42.3.16:80
-A KUBE-SVC-YO4TYIGBCWYI3VFG -m comment --comment "haproxy-controlplane/haproxy:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WGVVCPO7OWFZV533
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-U5YXGQUDWU5PP6YA ! -s 10.42.0.0/16 -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -s 10.42.4.10/32 -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -p tcp -m comment --comment "knative-serving/controller:http-profiling" -m tcp -j DNAT --to-destination 10.42.4.10:8008
-A KUBE-SVC-EI3IQDCVBHMKV2JY -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-SEP-WGMR3X3ESA53N4LK
-A KUBE-SEP-HTSZP7CY7BEFKHVJ -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-JLLHRZHLOS4X2HGL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-KIMRYYUXLHYUR34P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.2.198:9153
-A KUBE-SEP-XFZ2OK753ABTIOFY -s 10.42.2.184/32 -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-XFZ2OK753ABTIOFY -p tcp -m comment --comment "longhorn-system/longhorn-frontend:http" -m tcp -j DNAT --to-destination 10.42.2.184:8000
-A KUBE-SVC-ELE3EOWUPGZLKWXO -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-SEP-XFZ2OK753ABTIOFY
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -s 10.42.4.253/32 -m comment --comment "olm/packageserver-service:5443" -j KUBE-MARK-MASQ
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -p tcp -m comment --comment "olm/packageserver-service:5443" -m tcp -j DNAT --to-destination 10.42.4.253:5443
-A KUBE-ROUTER-INPUT -s 10.42.4.253/32 -m comment --comment "rule to jump traffic from POD name:packageserver-86d94b6bb4-cwvrc namespace: olm to chain KUBE-POD-FW-DIB6LBIWNATK5ER4" -j KUBE-POD-FW-DIB6LBIWNATK5ER4
-A KUBE-ROUTER-FORWARD -s 10.42.4.253/32 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:packageserver-86d94b6bb4-cwvrc namespace: olm to chain KUBE-POD-FW-DIB6LBIWNATK5ER4" -j KUBE-POD-FW-DIB6LBIWNATK5ER4
-A KUBE-ROUTER-FORWARD -s 10.42.4.253/32 -m comment --comment "rule to jump traffic from POD name:packageserver-86d94b6bb4-cwvrc namespace: olm to chain KUBE-POD-FW-DIB6LBIWNATK5ER4" -j KUBE-POD-FW-DIB6LBIWNATK5ER4
-A KUBE-ROUTER-FORWARD -d 10.42.4.253/32 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:packageserver-86d94b6bb4-cwvrc namespace: olm to chain KUBE-POD-FW-DIB6LBIWNATK5ER4" -j KUBE-POD-FW-DIB6LBIWNATK5ER4
-A KUBE-ROUTER-FORWARD -d 10.42.4.253/32 -m comment --comment "rule to jump traffic destined to POD name:packageserver-86d94b6bb4-cwvrc namespace: olm to chain KUBE-POD-FW-DIB6LBIWNATK5ER4" -j KUBE-POD-FW-DIB6LBIWNATK5ER4
-A KUBE-ROUTER-OUTPUT -s 10.42.4.253/32 -m comment --comment "rule to jump traffic from POD name:packageserver-86d94b6bb4-cwvrc namespace: olm to chain KUBE-POD-FW-DIB6LBIWNATK5ER4" -j KUBE-POD-FW-DIB6LBIWNATK5ER4
-A KUBE-ROUTER-OUTPUT -d 10.42.4.253/32 -m comment --comment "rule to jump traffic destined to POD name:packageserver-86d94b6bb4-cwvrc namespace: olm to chain KUBE-POD-FW-DIB6LBIWNATK5ER4" -j KUBE-POD-FW-DIB6LBIWNATK5ER4
-A KUBE-SERVICES -d 10.43.253.254/32 -p tcp -m comment --comment "minio/minio:http has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
-A KUBE-POD-FW-DIB6LBIWNATK5ER4 -d 10.42.4.253/32 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod\'s local node" -m addrtype --src-type LOCAL -j ACCEPT
-A KUBE-POD-FW-DIB6LBIWNATK5ER4 -s 10.42.4.253/32 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-A KUBE-POD-FW-DIB6LBIWNATK5ER4 -d 10.42.4.253/32 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT

worker 2

dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8537
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 2fe7f3df72372b41 (echoed)
;; QUESTION SECTION:
;kubernetes.default.svc.cluster.local. IN A

;; ANSWER SECTION:
kubernetes.default.svc.cluster.local. 5	IN A	10.43.0.1

;; Query time: 0 msec
;; SERVER: 10.43.0.10#53(10.43.0.10)
;; WHEN: Fri Jan 28 19:09:01 CET 2022
;; MSG SIZE  rcvd: 129

iptables-save | grep 53
:KUBE-SEP-WGMR3X3ESA53N4LK - [0:0]
:KUBE-SEP-WGVVCPO7OWFZV533 - [0:0]
:KUBE-SEP-XFZ2OK753ABTIOFY - [0:0]
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-SVC-U5YXGQUDWU5PP6YA
-A KUBE-SVC-U5YXGQUDWU5PP6YA ! -s 10.42.0.0/16 -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -s 10.42.4.10/32 -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -p tcp -m comment --comment "knative-serving/controller:http-profiling" -m tcp -j DNAT --to-destination 10.42.4.10:8008
-A KUBE-SVC-EI3IQDCVBHMKV2JY -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-SEP-WGMR3X3ESA53N4LK
-A KUBE-SEP-WGVVCPO7OWFZV533 -s 10.42.3.16/32 -m comment --comment "haproxy-controlplane/haproxy:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -p tcp -m comment --comment "haproxy-controlplane/haproxy:http" -m tcp -j DNAT --to-destination 10.42.3.16:80
-A KUBE-SVC-YO4TYIGBCWYI3VFG -m comment --comment "haproxy-controlplane/haproxy:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WGVVCPO7OWFZV533
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SEP-KIMRYYUXLHYUR34P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.2.198:9153
-A KUBE-SEP-JLLHRZHLOS4X2HGL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-HTSZP7CY7BEFKHVJ -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-XFZ2OK753ABTIOFY -s 10.42.2.184/32 -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-XFZ2OK753ABTIOFY -p tcp -m comment --comment "longhorn-system/longhorn-frontend:http" -m tcp -j DNAT --to-destination 10.42.2.184:8000
-A KUBE-SVC-ELE3EOWUPGZLKWXO -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-SEP-XFZ2OK753ABTIOFY
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -s 10.42.4.253/32 -m comment --comment "olm/packageserver-service:5443" -j KUBE-MARK-MASQ
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -p tcp -m comment --comment "olm/packageserver-service:5443" -m tcp -j DNAT --to-destination 10.42.4.253:5443
-A KUBE-SERVICES -d 10.43.253.254/32 -p tcp -m comment --comment "minio/minio:http has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

worker 3

dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

iptables-save | grep 53
:KUBE-SEP-WGMR3X3ESA53N4LK - [0:0]
:KUBE-SEP-WGVVCPO7OWFZV533 - [0:0]
:KUBE-SEP-XFZ2OK753ABTIOFY - [0:0]
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-SVC-U5YXGQUDWU5PP6YA
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-U5YXGQUDWU5PP6YA ! -s 10.42.0.0/16 -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -s 10.42.4.10/32 -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -p tcp -m comment --comment "knative-serving/controller:http-profiling" -m tcp -j DNAT --to-destination 10.42.4.10:8008
-A KUBE-SVC-EI3IQDCVBHMKV2JY -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-SEP-WGMR3X3ESA53N4LK
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -s 10.42.3.16/32 -m comment --comment "haproxy-controlplane/haproxy:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -p tcp -m comment --comment "haproxy-controlplane/haproxy:http" -m tcp -j DNAT --to-destination 10.42.3.16:80
-A KUBE-SVC-YO4TYIGBCWYI3VFG -m comment --comment "haproxy-controlplane/haproxy:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WGVVCPO7OWFZV533
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SEP-KIMRYYUXLHYUR34P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.2.198:9153
-A KUBE-SEP-HTSZP7CY7BEFKHVJ -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-JLLHRZHLOS4X2HGL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-XFZ2OK753ABTIOFY -s 10.42.2.184/32 -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-XFZ2OK753ABTIOFY -p tcp -m comment --comment "longhorn-system/longhorn-frontend:http" -m tcp -j DNAT --to-destination 10.42.2.184:8000
-A KUBE-SVC-ELE3EOWUPGZLKWXO -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-SEP-XFZ2OK753ABTIOFY
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -s 10.42.4.253/32 -m comment --comment "olm/packageserver-service:5443" -j KUBE-MARK-MASQ
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -p tcp -m comment --comment "olm/packageserver-service:5443" -m tcp -j DNAT --to-destination 10.42.4.253:5443
-A KUBE-ROUTER-INPUT -s 10.42.6.153/32 -m comment --comment "rule to jump traffic from POD name:pki-manager-746889d789-dqsh6 namespace: pki-operator to chain KUBE-POD-FW-YMBGUBZYOJMGWU5U" -j KUBE-POD-FW-YMBGUBZYOJMGWU5U
-A KUBE-ROUTER-FORWARD -s 10.42.6.153/32 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:pki-manager-746889d789-dqsh6 namespace: pki-operator to chain KUBE-POD-FW-YMBGUBZYOJMGWU5U" -j KUBE-POD-FW-YMBGUBZYOJMGWU5U
-A KUBE-ROUTER-FORWARD -s 10.42.6.153/32 -m comment --comment "rule to jump traffic from POD name:pki-manager-746889d789-dqsh6 namespace: pki-operator to chain KUBE-POD-FW-YMBGUBZYOJMGWU5U" -j KUBE-POD-FW-YMBGUBZYOJMGWU5U
-A KUBE-ROUTER-FORWARD -d 10.42.6.153/32 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:pki-manager-746889d789-dqsh6 namespace: pki-operator to chain KUBE-POD-FW-YMBGUBZYOJMGWU5U" -j KUBE-POD-FW-YMBGUBZYOJMGWU5U
-A KUBE-ROUTER-FORWARD -d 10.42.6.153/32 -m comment --comment "rule to jump traffic destined to POD name:pki-manager-746889d789-dqsh6 namespace: pki-operator to chain KUBE-POD-FW-YMBGUBZYOJMGWU5U" -j KUBE-POD-FW-YMBGUBZYOJMGWU5U
-A KUBE-ROUTER-OUTPUT -s 10.42.6.153/32 -m comment --comment "rule to jump traffic from POD name:pki-manager-746889d789-dqsh6 namespace: pki-operator to chain KUBE-POD-FW-YMBGUBZYOJMGWU5U" -j KUBE-POD-FW-YMBGUBZYOJMGWU5U
-A KUBE-ROUTER-OUTPUT -d 10.42.6.153/32 -m comment --comment "rule to jump traffic destined to POD name:pki-manager-746889d789-dqsh6 namespace: pki-operator to chain KUBE-POD-FW-YMBGUBZYOJMGWU5U" -j KUBE-POD-FW-YMBGUBZYOJMGWU5U
-A KUBE-SERVICES -d 10.43.253.254/32 -p tcp -m comment --comment "minio/minio:http has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-POD-FW-YMBGUBZYOJMGWU5U -d 10.42.6.153/32 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod\'s local node" -m addrtype --src-type LOCAL -j ACCEPT
-A KUBE-POD-FW-YMBGUBZYOJMGWU5U -s 10.42.6.153/32 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-A KUBE-POD-FW-YMBGUBZYOJMGWU5U -d 10.42.6.153/32 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

front 1

dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

iptables-save | grep 53
:KUBE-SEP-WGVVCPO7OWFZV533 - [0:0]
:KUBE-SEP-WGMR3X3ESA53N4LK - [0:0]
:KUBE-SEP-XFZ2OK753ABTIOFY - [0:0]
-A KUBE-SERVICES -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-SVC-U5YXGQUDWU5PP6YA
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SVC-YO4TYIGBCWYI3VFG -m comment --comment "haproxy-controlplane/haproxy:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WGVVCPO7OWFZV533
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-U5YXGQUDWU5PP6YA ! -s 10.42.0.0/16 -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -s 10.42.3.16/32 -m comment --comment "haproxy-controlplane/haproxy:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -p tcp -m comment --comment "haproxy-controlplane/haproxy:http" -m tcp -j DNAT --to-destination 10.42.3.16:80
-A KUBE-SEP-WGMR3X3ESA53N4LK -s 10.42.4.10/32 -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -p tcp -m comment --comment "knative-serving/controller:http-profiling" -m tcp -j DNAT --to-destination 10.42.4.10:8008
-A KUBE-SVC-EI3IQDCVBHMKV2JY -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-SEP-WGMR3X3ESA53N4LK
-A KUBE-SEP-HTSZP7CY7BEFKHVJ -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-KIMRYYUXLHYUR34P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.2.198:9153
-A KUBE-SEP-JLLHRZHLOS4X2HGL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-XFZ2OK753ABTIOFY -s 10.42.2.184/32 -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-XFZ2OK753ABTIOFY -p tcp -m comment --comment "longhorn-system/longhorn-frontend:http" -m tcp -j DNAT --to-destination 10.42.2.184:8000
-A KUBE-SVC-ELE3EOWUPGZLKWXO -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-SEP-XFZ2OK753ABTIOFY
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -s 10.42.4.253/32 -m comment --comment "olm/packageserver-service:5443" -j KUBE-MARK-MASQ
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -p tcp -m comment --comment "olm/packageserver-service:5443" -m tcp -j DNAT --to-destination 10.42.4.253:5443
-A KUBE-SERVICES -d 10.43.253.254/32 -p tcp -m comment --comment "minio/minio:http has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

front 2

dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

iptables-save | grep 53
:KUBE-SEP-WGVVCPO7OWFZV533 - [0:0]
:KUBE-SEP-WGMR3X3ESA53N4LK - [0:0]
:KUBE-SEP-XFZ2OK753ABTIOFY - [0:0]
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-SVC-U5YXGQUDWU5PP6YA
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-U5YXGQUDWU5PP6YA ! -s 10.42.0.0/16 -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -s 10.42.3.16/32 -m comment --comment "haproxy-controlplane/haproxy:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGVVCPO7OWFZV533 -p tcp -m comment --comment "haproxy-controlplane/haproxy:http" -m tcp -j DNAT --to-destination 10.42.3.16:80
-A KUBE-SVC-YO4TYIGBCWYI3VFG -m comment --comment "haproxy-controlplane/haproxy:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WGVVCPO7OWFZV533
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -s 10.42.4.10/32 -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-MARK-MASQ
-A KUBE-SEP-WGMR3X3ESA53N4LK -p tcp -m comment --comment "knative-serving/controller:http-profiling" -m tcp -j DNAT --to-destination 10.42.4.10:8008
-A KUBE-SVC-EI3IQDCVBHMKV2JY -m comment --comment "knative-serving/controller:http-profiling" -j KUBE-SEP-WGMR3X3ESA53N4LK
-A KUBE-SEP-JLLHRZHLOS4X2HGL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-KIMRYYUXLHYUR34P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.2.198:9153
-A KUBE-SEP-HTSZP7CY7BEFKHVJ -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.2.198:53
-A KUBE-SEP-XFZ2OK753ABTIOFY -s 10.42.2.184/32 -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-XFZ2OK753ABTIOFY -p tcp -m comment --comment "longhorn-system/longhorn-frontend:http" -m tcp -j DNAT --to-destination 10.42.2.184:8000
-A KUBE-SVC-ELE3EOWUPGZLKWXO -m comment --comment "longhorn-system/longhorn-frontend:http" -j KUBE-SEP-XFZ2OK753ABTIOFY
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -s 10.42.4.253/32 -m comment --comment "olm/packageserver-service:5443" -j KUBE-MARK-MASQ
-A KUBE-SEP-MFOVYHP43IA5P7Y7 -p tcp -m comment --comment "olm/packageserver-service:5443" -m tcp -j DNAT --to-destination 10.42.4.253:5443
-A KUBE-SERVICES -d 10.43.253.254/32 -p tcp -m comment --comment "minio/minio:http has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

@manuelbuil
Contributor

Thanks, so it works on worker 2. Can you confirm that the coredns pod is running on that host?

@manuelbuil
Contributor

Can you please show me the output of kubectl get pods -A -o wide? Thanks

@chris93111
Author

chris93111 commented Jan 31, 2022

@manuelbuil yes, it is running on worker 2, see:


vldsocfg02-node    coredns-85cb69466-9l7lp
vldsocfg02-node    dashboard-kubernetes-dashboard-67f5d799bb-jc68p
vldsocfg02-node    helm-install-dashboard--1-tksvp
vldsocfg02-node    helm-install-kube-prometheus-stack--1-8jj4n
vldsocfg01-node    helm-install-traefik--1-s46gx
vldsocfg02-node    helm-install-traefik-crd--1-b4t49
vldsocfg02-node    local-path-provisioner-64ffb68fd-6clt6
vldsocfg02-node    metrics-server-9cf544f65-jjf49
vldsocfg02-front   traefik-7d484c79d5-mxd78

@manuelbuil
Contributor

Can you try to ping coredns from other nodes?

@chris93111
Author

@manuelbuil

Master 1

kubectl get pods -o wide -A | grep coredns
kube-system            coredns-85cb69466-9l7lp                                           1/1     Running             8 (4m32s ago)     5d5h    10.42.2.95    vldsocfg02-node     <none>           <none>
[root@vldsocfg01-master~]# ping 10.42.2.95
PING 10.42.2.95 (10.42.2.95) 56(84) bytes of data.
64 bytes from 10.42.2.95: icmp_seq=1 ttl=63 time=0.316 ms
64 bytes from 10.42.2.95: icmp_seq=2 ttl=63 time=0.264 ms
64 bytes from 10.42.2.95: icmp_seq=3 ttl=63 time=0.279 ms

Node front 1

[root@vldsocfg01-front~]# ping 10.42.2.95
PING 10.42.2.95 (10.42.2.95) 56(84) bytes of data.
64 bytes from 10.42.2.95: icmp_seq=1 ttl=63 time=0.442 ms
64 bytes from 10.42.2.95: icmp_seq=2 ttl=63 time=0.314 ms
64 bytes from 10.42.2.95: icmp_seq=3 ttl=63 time=0.383 ms

Worker 1

[rancher@vldsocfg01-node ~]$ ping 10.42.2.95
PING 10.42.2.95 (10.42.2.95) 56(84) bytes of data.
64 bytes from 10.42.2.95: icmp_seq=1 ttl=63 time=0.421 ms
64 bytes from 10.42.2.95: icmp_seq=2 ttl=63 time=0.290 ms
64 bytes from 10.42.2.95: icmp_seq=3 ttl=63 time=0.283 ms

From a pod running on worker 3

bash-4.4# printenv | grep POD_IP
MY_POD_IP=10.42.6.2
bash-4.4# ping 10.42.2.95
PING 10.42.2.95 (10.42.2.95) 56(84) bytes of data.
64 bytes from 10.42.2.95: icmp_seq=1 ttl=62 time=0.511 ms
64 bytes from 10.42.2.95: icmp_seq=2 ttl=62 time=0.414 ms
64 bytes from 10.42.2.95: icmp_seq=3 ttl=62 time=0.441 ms

@manuelbuil
Contributor

According to the iptables rules you showed me, the coredns IP is 10.42.2.198, but now you ping 10.42.2.95. Did you redeploy?
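A quick cross-check (a sketch, assuming the default kube-dns service name) is to compare the live endpoint with the DNAT target in the rules; the two IPs should match:

# The endpoint IP here should equal the --to-destination in the kube-dns DNAT rules
kubectl -n kube-system get endpoints kube-dns
iptables-save | grep 'kube-dns:dns'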

@manuelbuil
Contributor

Could you run the following:

  • On worker2: sudo tcpdump -nni flannel.1 port 53
  • On worker1: dig @10.43.0.10 kubernetes.default.svc.cluster.local

And show me the output?

@manuelbuil
Contributor

And also run sudo nft list tables on worker1 and worker2.
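For context: RHEL 8 uses the nf_tables backend for iptables, and a mix of legacy and nft rules can silently drop traffic; the "iptables-legacy tables present" warning in your dump hints at exactly that. A quick way to see which backend a node is on (a sketch; the legacy binaries may not be installed on RHEL 8):

# Prints the backend in parentheses, e.g. "iptables v1.8.4 (nf_tables)"
iptables --version
# Shows what the kernel's nftables subsystem actually holds
sudo nft list ruleset | head -n 20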

@chris93111
Author

chris93111 commented Jan 31, 2022

@manuelbuil yes, I restarted the nodes

[root@vldsocfg01-master ~]# kubectl get pods -A -o wide | grep coredns
kube-system            coredns-85cb69466-9l7lp                                           1/1     Running             8 (93m ago)      5d6h    10.42.2.95    vldsocfg02-node    <none>           <none>
[root@vldsocfg01-master ~]# 


[root@vldsocfg02-node ~]# sudo nft list tables
table ip6 nat
table ip6 mangle
table ip mangle
table ip nat
table ip6 filter
table ip filter

[root@vldsocfg01-node ~]# sudo nft list tables
table ip6 nat
table ip mangle
table ip6 mangle
table ip nat
table ip6 filter
table ip filter

[root@vldsocfg02-node ~]# sudo tcpdump -nni flannel.1 port 53
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes

[root@vldsocfg01-node ~]# dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

[root@vldsocfg01-node ~]# iptables-save | grep 53
:KUBE-SVC-TDKCWCDYOLDGLE53 - [0:0]
:KUBE-SEP-TA2NLCBV3Z53SZ6Y - [0:0]
:KUBE-SEP-G4PXKZH4GZ53CPYE - [0:0]
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.43.205.253/32 -p tcp -m comment --comment "istio-system/grafana:service cluster IP" -m tcp --dport 3000 -j KUBE-SVC-Y3OVZYCKHGYTKGDA
-A KUBE-SERVICES -d 10.43.56.108/32 -p tcp -m comment --comment "harbor/harbor-chartmuseum cluster IP" -m tcp --dport 80 -j KUBE-SVC-TDKCWCDYOLDGLE53
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-SVC-U5YXGQUDWU5PP6YA
-A KUBE-SERVICES -d 10.43.117.53/32 -p tcp -m comment --comment "knative-serving/controller:http-metrics cluster IP" -m tcp --dport 9090 -j KUBE-SVC-F2PLWJBFKF34IUKL
-A KUBE-SERVICES -d 10.43.117.53/32 -p tcp -m comment --comment "knative-serving/controller:http-profiling cluster IP" -m tcp --dport 8008 -j KUBE-SVC-EI3IQDCVBHMKV2JY
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-NQL64TY2T3FWFXFQ -m comment --comment "longhorn-system/longhorn-backend:manager" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-G4PXKZH4GZ53CPYE --mask 255.255.255.255 --rsource -j KUBE-SEP-G4PXKZH4GZ53CPYE
-A KUBE-SVC-NQL64TY2T3FWFXFQ -m comment --comment "longhorn-system/longhorn-backend:manager" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-G4PXKZH4GZ53CPYE
-A KUBE-SVC-EI3IQDCVBHMKV2JY ! -s 10.42.0.0/16 -d 10.43.117.53/32 -p tcp -m comment --comment "knative-serving/controller:http-profiling cluster IP" -m tcp --dport 8008 -j KUBE-MARK-MASQ
-A KUBE-SVC-F2PLWJBFKF34IUKL ! -s 10.42.0.0/16 -d 10.43.117.53/32 -p tcp -m comment --comment "knative-serving/controller:http-metrics cluster IP" -m tcp --dport 9090 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TDKCWCDYOLDGLE53 ! -s 10.42.0.0/16 -d 10.43.56.108/32 -p tcp -m comment --comment "harbor/harbor-chartmuseum cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-TDKCWCDYOLDGLE53 -m comment --comment "harbor/harbor-chartmuseum" -j KUBE-SEP-QLHXWWHMRNJBVA6N
-A KUBE-SVC-Y3OVZYCKHGYTKGDA ! -s 10.42.0.0/16 -d 10.43.205.253/32 -p tcp -m comment --comment "istio-system/grafana:service cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SEP-BVOMFPSF5J7XRKAH -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.2.95:53
-A KUBE-SEP-4B7P6DM7JTAE26WF -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.2.95:53
-A KUBE-SEP-VW3FVVPCCZ46AS3F -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.2.95:9153
-A KUBE-SEP-TA2NLCBV3Z53SZ6Y -s 10.42.4.34/32 -m comment --comment "knative-serving/autoscaler:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-TA2NLCBV3Z53SZ6Y -p tcp -m comment --comment "knative-serving/autoscaler:http" -m tcp -j DNAT --to-destination 10.42.4.34:8080
-A KUBE-SVC-YIB7IH74HCZQXJ5A -m comment --comment "knative-serving/autoscaler:http" -j KUBE-SEP-TA2NLCBV3Z53SZ6Y
-A KUBE-SEP-G4PXKZH4GZ53CPYE -s 10.42.2.99/32 -m comment --comment "longhorn-system/longhorn-backend:manager" -j KUBE-MARK-MASQ
-A KUBE-SEP-G4PXKZH4GZ53CPYE -p tcp -m comment --comment "longhorn-system/longhorn-backend:manager" -m recent --set --name KUBE-SEP-G4PXKZH4GZ53CPYE --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.42.2.99:9500
-A KUBE-SVC-U5YXGQUDWU5PP6YA ! -s 10.42.0.0/16 -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SEP-5AQIVT2EWGWN4DWD -s 10.42.6.253/32 -m comment --comment "longhorn-system/csi-resizer:dummy" -j KUBE-MARK-MASQ
-A KUBE-SEP-5AQIVT2EWGWN4DWD -p tcp -m comment --comment "longhorn-system/csi-resizer:dummy" -m tcp -j DNAT --to-destination 10.42.6.253:12345
:KUBE-POD-FW-IJ44YPAKZ533OJ6E - [0:0]
-A KUBE-ROUTER-INPUT -s 10.42.4.26/32 -m comment --comment "rule to jump traffic from POD name:longhorn-csi-plugin-s8jn7 namespace: longhorn-system to chain KUBE-POD-FW-IJ44YPAKZ533OJ6E" -j KUBE-POD-FW-IJ44YPAKZ533OJ6E
-A KUBE-ROUTER-FORWARD -s 10.42.4.26/32 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:longhorn-csi-plugin-s8jn7 namespace: longhorn-system to chain KUBE-POD-FW-IJ44YPAKZ533OJ6E" -j KUBE-POD-FW-IJ44YPAKZ533OJ6E
-A KUBE-ROUTER-FORWARD -s 10.42.4.26/32 -m comment --comment "rule to jump traffic from POD name:longhorn-csi-plugin-s8jn7 namespace: longhorn-system to chain KUBE-POD-FW-IJ44YPAKZ533OJ6E" -j KUBE-POD-FW-IJ44YPAKZ533OJ6E
-A KUBE-ROUTER-FORWARD -d 10.42.4.26/32 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:longhorn-csi-plugin-s8jn7 namespace: longhorn-system to chain KUBE-POD-FW-IJ44YPAKZ533OJ6E" -j KUBE-POD-FW-IJ44YPAKZ533OJ6E
-A KUBE-ROUTER-FORWARD -d 10.42.4.26/32 -m comment --comment "rule to jump traffic destined to POD name:longhorn-csi-plugin-s8jn7 namespace: longhorn-system to chain KUBE-POD-FW-IJ44YPAKZ533OJ6E" -j KUBE-POD-FW-IJ44YPAKZ533OJ6E
-A KUBE-ROUTER-OUTPUT -s 10.42.4.26/32 -m comment --comment "rule to jump traffic from POD name:longhorn-csi-plugin-s8jn7 namespace: longhorn-system to chain KUBE-POD-FW-IJ44YPAKZ533OJ6E" -j KUBE-POD-FW-IJ44YPAKZ533OJ6E
-A KUBE-ROUTER-OUTPUT -d 10.42.4.26/32 -m comment --comment "rule to jump traffic destined to POD name:longhorn-csi-plugin-s8jn7 namespace: longhorn-system to chain KUBE-POD-FW-IJ44YPAKZ533OJ6E" -j KUBE-POD-FW-IJ44YPAKZ533OJ6E
-A KUBE-SERVICES -d 10.43.253.254/32 -p tcp -m comment --comment "minio/minio:http has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-POD-FW-IJ44YPAKZ533OJ6E -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-POD-FW-IJ44YPAKZ533OJ6E -d 10.42.4.26/32 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod\'s local node" -m addrtype --src-type LOCAL -j ACCEPT
-A KUBE-POD-FW-IJ44YPAKZ533OJ6E -s 10.42.4.26/32 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-A KUBE-POD-FW-IJ44YPAKZ533OJ6E -d 10.42.4.26/32 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-A KUBE-POD-FW-IJ44YPAKZ533OJ6E -m comment --comment "rule to log dropped traffic POD name:longhorn-csi-plugin-s8jn7 namespace: longhorn-system" -m mark ! --mark 0x10000/0x10000 -m limit --limit 10/min --limit-burst 10 -j NFLOG --nflog-group 100
-A KUBE-POD-FW-IJ44YPAKZ533OJ6E -m comment --comment "rule to REJECT traffic destined for POD name:longhorn-csi-plugin-s8jn7 namespace: longhorn-system" -m mark ! --mark 0x10000/0x10000 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-POD-FW-IJ44YPAKZ533OJ6E -j MARK --set-xmark 0x0/0x10000
-A KUBE-POD-FW-IJ44YPAKZ533OJ6E -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-xmark 0x20000/0x20000
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them


[root@vldsocfg02-node ~]# iptables-save | grep 53
:KUBE-SVC-TDKCWCDYOLDGLE53 - [0:0]
:KUBE-SEP-TA2NLCBV3Z53SZ6Y - [0:0]
:KUBE-SEP-G4PXKZH4GZ53CPYE - [0:0]
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-SVC-U5YXGQUDWU5PP6YA
-A KUBE-SERVICES -d 10.43.117.53/32 -p tcp -m comment --comment "knative-serving/controller:http-metrics cluster IP" -m tcp --dport 9090 -j KUBE-SVC-F2PLWJBFKF34IUKL
-A KUBE-SERVICES -d 10.43.117.53/32 -p tcp -m comment --comment "knative-serving/controller:http-profiling cluster IP" -m tcp --dport 8008 -j KUBE-SVC-EI3IQDCVBHMKV2JY
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.43.56.108/32 -p tcp -m comment --comment "harbor/harbor-chartmuseum cluster IP" -m tcp --dport 80 -j KUBE-SVC-TDKCWCDYOLDGLE53
-A KUBE-SERVICES -d 10.43.205.253/32 -p tcp -m comment --comment "istio-system/grafana:service cluster IP" -m tcp --dport 3000 -j KUBE-SVC-Y3OVZYCKHGYTKGDA
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SVC-F2PLWJBFKF34IUKL ! -s 10.42.0.0/16 -d 10.43.117.53/32 -p tcp -m comment --comment "knative-serving/controller:http-metrics cluster IP" -m tcp --dport 9090 -j KUBE-MARK-MASQ
-A KUBE-SVC-EI3IQDCVBHMKV2JY ! -s 10.42.0.0/16 -d 10.43.117.53/32 -p tcp -m comment --comment "knative-serving/controller:http-profiling cluster IP" -m tcp --dport 8008 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-NQL64TY2T3FWFXFQ -m comment --comment "longhorn-system/longhorn-backend:manager" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-G4PXKZH4GZ53CPYE --mask 255.255.255.255 --rsource -j KUBE-SEP-G4PXKZH4GZ53CPYE
-A KUBE-SVC-NQL64TY2T3FWFXFQ -m comment --comment "longhorn-system/longhorn-backend:manager" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-G4PXKZH4GZ53CPYE
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TDKCWCDYOLDGLE53 ! -s 10.42.0.0/16 -d 10.43.56.108/32 -p tcp -m comment --comment "harbor/harbor-chartmuseum cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-TDKCWCDYOLDGLE53 -m comment --comment "harbor/harbor-chartmuseum" -j KUBE-SEP-QLHXWWHMRNJBVA6N
-A KUBE-SVC-Y3OVZYCKHGYTKGDA ! -s 10.42.0.0/16 -d 10.43.205.253/32 -p tcp -m comment --comment "istio-system/grafana:service cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SEP-VW3FVVPCCZ46AS3F -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.2.95:9153
-A KUBE-SEP-BVOMFPSF5J7XRKAH -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.2.95:53
-A KUBE-SEP-4B7P6DM7JTAE26WF -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.2.95:53
-A KUBE-SEP-TA2NLCBV3Z53SZ6Y -s 10.42.4.34/32 -m comment --comment "knative-serving/autoscaler:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-TA2NLCBV3Z53SZ6Y -p tcp -m comment --comment "knative-serving/autoscaler:http" -m tcp -j DNAT --to-destination 10.42.4.34:8080
-A KUBE-SVC-YIB7IH74HCZQXJ5A -m comment --comment "knative-serving/autoscaler:http" -j KUBE-SEP-TA2NLCBV3Z53SZ6Y
-A KUBE-SEP-G4PXKZH4GZ53CPYE -s 10.42.2.99/32 -m comment --comment "longhorn-system/longhorn-backend:manager" -j KUBE-MARK-MASQ
-A KUBE-SEP-G4PXKZH4GZ53CPYE -p tcp -m comment --comment "longhorn-system/longhorn-backend:manager" -m recent --set --name KUBE-SEP-G4PXKZH4GZ53CPYE --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.42.2.99:9500
-A KUBE-SVC-U5YXGQUDWU5PP6YA ! -s 10.42.0.0/16 -d 10.43.253.151/32 -p tcp -m comment --comment "openshift-console/console:http cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SEP-5AQIVT2EWGWN4DWD -s 10.42.6.253/32 -m comment --comment "longhorn-system/csi-resizer:dummy" -j KUBE-MARK-MASQ
-A KUBE-SEP-5AQIVT2EWGWN4DWD -p tcp -m comment --comment "longhorn-system/csi-resizer:dummy" -m tcp -j DNAT --to-destination 10.42.6.253:12345
:KUBE-POD-FW-6KLYAEZFSGSPR53F - [0:0]
-A KUBE-ROUTER-INPUT -s 10.42.2.109/32 -m comment --comment "rule to jump traffic from POD name:webhook-6d5c77f989-c9xxg namespace: knative-serving to chain KUBE-POD-FW-6KLYAEZFSGSPR53F" -j KUBE-POD-FW-6KLYAEZFSGSPR53F
-A KUBE-ROUTER-FORWARD -s 10.42.2.109/32 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:webhook-6d5c77f989-c9xxg namespace: knative-serving to chain KUBE-POD-FW-6KLYAEZFSGSPR53F" -j KUBE-POD-FW-6KLYAEZFSGSPR53F
-A KUBE-ROUTER-FORWARD -s 10.42.2.109/32 -m comment --comment "rule to jump traffic from POD name:webhook-6d5c77f989-c9xxg namespace: knative-serving to chain KUBE-POD-FW-6KLYAEZFSGSPR53F" -j KUBE-POD-FW-6KLYAEZFSGSPR53F
-A KUBE-ROUTER-FORWARD -d 10.42.2.109/32 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:webhook-6d5c77f989-c9xxg namespace: knative-serving to chain KUBE-POD-FW-6KLYAEZFSGSPR53F" -j KUBE-POD-FW-6KLYAEZFSGSPR53F
-A KUBE-ROUTER-FORWARD -d 10.42.2.109/32 -m comment --comment "rule to jump traffic destined to POD name:webhook-6d5c77f989-c9xxg namespace: knative-serving to chain KUBE-POD-FW-6KLYAEZFSGSPR53F" -j KUBE-POD-FW-6KLYAEZFSGSPR53F
-A KUBE-ROUTER-OUTPUT -s 10.42.2.109/32 -m comment --comment "rule to jump traffic from POD name:webhook-6d5c77f989-c9xxg namespace: knative-serving to chain KUBE-POD-FW-6KLYAEZFSGSPR53F" -j KUBE-POD-FW-6KLYAEZFSGSPR53F
-A KUBE-ROUTER-OUTPUT -d 10.42.2.109/32 -m comment --comment "rule to jump traffic destined to POD name:webhook-6d5c77f989-c9xxg namespace: knative-serving to chain KUBE-POD-FW-6KLYAEZFSGSPR53F" -j KUBE-POD-FW-6KLYAEZFSGSPR53F
-A KUBE-SERVICES -d 10.43.253.254/32 -p tcp -m comment --comment "minio/minio:http has no endpoints" -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-POD-FW-6KLYAEZFSGSPR53F -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-POD-FW-6KLYAEZFSGSPR53F -d 10.42.2.109/32 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod\'s local node" -m addrtype --src-type LOCAL -j ACCEPT
-A KUBE-POD-FW-6KLYAEZFSGSPR53F -s 10.42.2.109/32 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-A KUBE-POD-FW-6KLYAEZFSGSPR53F -d 10.42.2.109/32 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-A KUBE-POD-FW-6KLYAEZFSGSPR53F -m comment --comment "rule to log dropped traffic POD name:webhook-6d5c77f989-c9xxg namespace: knative-serving" -m mark ! --mark 0x10000/0x10000 -m limit --limit 10/min --limit-burst 10 -j NFLOG --nflog-group 100
-A KUBE-POD-FW-6KLYAEZFSGSPR53F -m comment --comment "rule to REJECT traffic destined for POD name:webhook-6d5c77f989-c9xxg namespace: knative-serving" -m mark ! --mark 0x10000/0x10000 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-POD-FW-6KLYAEZFSGSPR53F -j MARK --set-xmark 0x0/0x10000
-A KUBE-POD-FW-6KLYAEZFSGSPR53F -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-xmark 0x20000/0x20000

We can clearly see there is no communication.

@manuelbuil
Contributor

manuelbuil commented Jan 31, 2022

(previous comment quoted in full)

Thanks. There is something strange going on. Can you try the dig command again, but now with these captures (what they test is explained right after the list):

  • On worker2: sudo tcpdump -vvvnni eth0 port 8472
  • On worker1: sudo tcpdump -vvvnni eth0 port 8472
  • On worker1 (another terminal): dig @10.43.0.10 kubernetes.default.svc.cluster.local
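For context, 8472/udp is flannel's default VXLAN port, so these captures show whether the encapsulated DNS queries actually leave worker1 and reach worker2 on the underlay (eth0 is an assumption here; substitute the node's real NIC). To narrow the capture to DNS traffic only, something like this should work:

# -l line-buffers the output so grep can filter the decoded inner packets live
sudo tcpdump -l -vvvnni eth0 'udp port 8472' | grep -B1 '\.53:'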

@chris93111
Author

chris93111 commented Jan 31, 2022

@manuelbuil ok, sorry, it is verbose


[root@vldsocfg01-node ~]# dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

Node 1

[root@vldsocfg01-node ~]# sudo tcpdump -vvvnni eth0 port 8472
dropped privs to tcpdump
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
18:31:36.311155 IP (tos 0x0, ttl 64, id 8456, offset 0, flags [none], proto UDP (17), length 129)
    x.y.6.8.52031 > x.y.6.13.8472: [bad udp cksum 0xdcaf -> 0x4279!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 2919, offset 0, flags [DF], proto UDP (17), length 79)
    10.42.4.29.41188 > 10.42.2.95.53: [bad udp cksum 0x1b1c -> 0x80e5!] 5078+ AAAA? harbor-database.svc.cluster.local. (51)
18:31:36.311169 IP (tos 0x0, ttl 64, id 8457, offset 0, flags [none], proto UDP (17), length 129)
    x.y.6.8.58877 > x.y.6.13.8472: [bad udp cksum 0xc1f1 -> 0x062a!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 2920, offset 0, flags [DF], proto UDP (17), length 79)
    10.42.4.29.60542 > 10.42.2.95.53: [bad udp cksum 0x1b1c -> 0x5f54!] 1229+ A? harbor-database.svc.cluster.local. (51)
18:31:36.381710 IP (tos 0x0, ttl 64, id 48428, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.37273 > x.y.6.15.8472: [bad udp cksum 0xb9c3 -> 0xa1db!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 50958, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.50200 > 10.42.6.244.8080: Flags [S], cksum 0x1fa2 (incorrect -> 0x07ba), seq 3387187564, win 28200, options [mss 1410,sackOK,TS val 3976115939 ecr 0,nop,wscale 7], length 0
18:31:37.066386 IP (tos 0x0, ttl 64, id 9183, offset 0, flags [none], proto UDP (17), length 155)
    x.y.6.8.46598 > x.y.6.13.8472: [bad udp cksum 0xf1b1 -> 0x521c!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 64, id 21136, offset 0, flags [none], proto UDP (17), length 105)
    10.42.4.0.40919 > 10.42.2.95.53: [bad udp cksum 0x1b19 -> 0x7b83!] 26647+ [1au] A? kubernetes.default.svc.cluster.local. ar: . OPT UDPsize=4096 (77)
18:31:37.341711 IP (tos 0x0, ttl 64, id 46123, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.35573 > x.y.6.8.8472: [bad udp cksum 0x26aa -> 0xeb79!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 27337, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.44468 > 10.42.3.28.7472: Flags [S], cksum 0x1bca (incorrect -> 0xe099), seq 2211950610, win 28200, options [mss 1410,sackOK,TS val 2968595988 ecr 0,nop,wscale 7], length 0
18:31:37.725810 IP (tos 0x0, ttl 64, id 9477, offset 0, flags [none], proto UDP (17), length 156)
    x.y.6.8.41016 > x.y.6.13.8472: [bad udp cksum 0x07a6 -> 0x8744!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 32098, offset 0, flags [DF], proto UDP (17), length 106)
    10.42.4.39.44237 > 10.42.2.95.53: [bad udp cksum 0x1b41 -> 0x9adf!] 32945+ A? elasticsearch-master-headless.kube-logging.svc.cluster.local. (78)
18:31:37.725835 IP (tos 0x0, ttl 64, id 9478, offset 0, flags [none], proto UDP (17), length 156)
    x.y.6.8.41016 > x.y.6.13.8472: [bad udp cksum 0x07a6 -> 0xef1d!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 32099, offset 0, flags [DF], proto UDP (17), length 106)
    10.42.4.39.44237 > 10.42.2.95.53: [bad udp cksum 0x1b41 -> 0x02b9!] 6333+ AAAA? elasticsearch-master-headless.kube-logging.svc.cluster.local. (78)
18:31:37.905394 IP (tos 0x0, ttl 64, id 9584, offset 0, flags [none], proto UDP (17), length 132)
    x.y.6.8.34583 > x.y.6.13.8472: [bad udp cksum 0x20e2 -> 0x7add!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 39869, offset 0, flags [DF], proto UDP (17), length 82)
    10.42.4.42.53737 > 10.42.2.95.53: [bad udp cksum 0x1b2c -> 0x7527!] 61107+ A? argocd-catalog.olm.svc.cluster.local. (54)
18:31:37.905407 IP (tos 0x0, ttl 64, id 9585, offset 0, flags [none], proto UDP (17), length 132)
    x.y.6.8.53817 > x.y.6.13.8472: [bad udp cksum 0xd5bf -> 0xcdcf!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 37426, offset 0, flags [DF], proto UDP (17), length 82)
    10.42.4.42.41109 > 10.42.2.95.53: [bad udp cksum 0x1b2c -> 0x133c!] 33240+ AAAA? argocd-catalog.olm.svc.cluster.local. (54)
18:31:37.981764 IP (tos 0x0, ttl 64, id 9601, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.37763 > x.y.6.13.8472: [bad udp cksum 0x1483 -> 0x2581!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 56723, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.60048 > 10.42.2.95.9153: Flags [S], cksum 0x1b0d (incorrect -> 0x2c0b), seq 3045481614, win 28200, options [mss 1410,sackOK,TS val 1410604709 ecr 0,nop,wscale 7], length 0
18:31:38.429748 IP (tos 0x0, ttl 64, id 9687, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.42111 > x.y.6.13.8472: [bad udp cksum 0x0395 -> 0xd856!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 20280, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.20.51392 > 10.42.2.133.10045: Flags [S], cksum 0x1b1b (incorrect -> 0xefdc), seq 570360647, win 28200, options [mss 1410,sackOK,TS val 3752319572 ecr 0,nop,wscale 7], length 0
18:31:38.877753 IP (tos 0x0, ttl 64, id 10059, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.36418 > x.y.6.13.8472: [bad udp cksum 0x19d2 -> 0xa47b!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 56152, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.20.52060 > 10.42.2.133.10030: Flags [S], cksum 0x1b1b (incorrect -> 0xa5c4), seq 982134919, win 28200, options [mss 1410,sackOK,TS val 3752320020 ecr 0,nop,wscale 7], length 0
18:31:38.996605 IP (tos 0x0, ttl 64, id 10101, offset 0, flags [none], proto UDP (17), length 149)
    x.y.6.8.59022 > x.y.6.13.8472: [bad udp cksum 0xc15b -> 0x7091!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 19713, offset 0, flags [DF], proto UDP (17), length 99)
    10.42.4.44.58403 > 10.42.2.95.53: [bad udp cksum 0x1b3f -> 0xca74!] 7072+ AAAA? kubernetes.default.svc.istio-system.svc.cluster.local. (71)
18:31:38.996605 IP (tos 0x0, ttl 64, id 10060, offset 0, flags [none], proto UDP (17), length 149)
    x.y.6.8.58011 > x.y.6.13.8472: [bad udp cksum 0xc54e -> 0xff3e!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 19712, offset 0, flags [DF], proto UDP (17), length 99)
    10.42.4.44.55711 > 10.42.2.95.53: [bad udp cksum 0x1b3f -> 0x552f!] 46697+ A? kubernetes.default.svc.istio-system.svc.cluster.local. (71)
18:31:39.039568 IP (tos 0x0, ttl 64, id 10116, offset 0, flags [none], proto UDP (17), length 146)
    x.y.6.8.34213 > x.y.6.13.8472: [bad udp cksum 0x2237 -> 0x3282!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 35556, offset 0, flags [DF], proto UDP (17), length 96)
    10.42.4.27.43983 > 10.42.2.95.53: [bad udp cksum 0x1b2b -> 0x2b76!] 43071+ A? longhorn-backend.longhorn-system.svc.cluster.local. (68)
18:31:39.039615 IP (tos 0x0, ttl 64, id 10117, offset 0, flags [none], proto UDP (17), length 146)
    x.y.6.8.34213 > x.y.6.13.8472: [bad udp cksum 0x2237 -> 0xfce1!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 35557, offset 0, flags [DF], proto UDP (17), length 96)
    10.42.4.27.43983 > 10.42.2.95.53: [bad udp cksum 0x1b2b -> 0xf5d5!] 56772+ AAAA? longhorn-backend.longhorn-system.svc.cluster.local. (68)
18:31:39.389769 IP (tos 0x0, ttl 64, id 10162, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.54410 > x.y.6.13.8472: [bad udp cksum 0xd389 -> 0x7234!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 42841, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.20.39994 > 10.42.2.133.10075: Flags [S], cksum 0x1b1b (incorrect -> 0xb9c5), seq 3258499532, win 28200, options [mss 1410,sackOK,TS val 3752320532 ecr 0,nop,wscale 7], length 0
18:31:39.573969 IP (tos 0x0, ttl 64, id 10259, offset 0, flags [none], proto UDP (17), length 158)
    x.y.6.8.37539 > x.y.6.13.8472: [bad udp cksum 0x1538 -> 0x1b5f!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 27966, offset 0, flags [DF], proto UDP (17), length 108)
    10.42.4.38.38685 > 10.42.2.95.53: [bad udp cksum 0x1b42 -> 0x2169!] 42876+ AAAA? argocd-redis.argocd.svc.cluster.local.argocd.svc.cluster.local. (80)
18:31:40.341800 IP (tos 0x0, ttl 64, id 46361, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.55112 > x.y.6.8.8472: [bad udp cksum 0xda54 -> 0x0164!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 58499, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.38826 > 10.42.3.26.15020: Flags [S], cksum 0x1bc8 (incorrect -> 0x42d7), seq 1052086688, win 28200, options [mss 1410,sackOK,TS val 4134724217 ecr 0,nop,wscale 7], length 0
18:31:40.413719 IP (tos 0x0, ttl 64, id 10459, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.58921 > x.y.6.13.8472: [bad udp cksum 0xc1ea -> 0x0167!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 60521, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.20.59934 > 10.42.2.133.10060: Flags [S], cksum 0x1b1b (incorrect -> 0x5a97), seq 3105675329, win 28200, options [mss 1410,sackOK,TS val 3752321556 ecr 0,nop,wscale 7], length 0
18:31:40.413731 IP (tos 0x0, ttl 64, id 49024, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.51202 > x.y.6.15.8472: [bad udp cksum 0x835a -> 0x5bb2!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 50959, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.50200 > 10.42.6.244.8080: Flags [S], cksum 0x1fa2 (incorrect -> 0xf7f9), seq 3387187564, win 28200, options [mss 1410,sackOK,TS val 3976119971 ecr 0,nop,wscale 7], length 0
18:31:40.574691 IP (tos 0x0, ttl 64, id 10462, offset 0, flags [none], proto UDP (17), length 158)
    x.y.6.8.52575 > x.y.6.13.8472: [bad udp cksum 0xda7b -> 0xfa46!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 28004, offset 0, flags [DF], proto UDP (17), length 108)
    10.42.4.38.38210 > 10.42.2.95.53: [bad udp cksum 0x1b42 -> 0x3b0d!] 36814+ A? argocd-redis.argocd.svc.cluster.local.argocd.svc.cluster.local. (80)
18:31:40.574705 IP (tos 0x0, ttl 64, id 10463, offset 0, flags [none], proto UDP (17), length 158)
    x.y.6.8.43208 > x.y.6.13.8472: [bad udp cksum 0xff12 -> 0xe741!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 28003, offset 0, flags [DF], proto UDP (17), length 108)
    10.42.4.38.40535 > 10.42.2.95.53: [bad udp cksum 0x1b42 -> 0x0371!] 48698+ AAAA? argocd-redis.argocd.svc.cluster.local.argocd.svc.cluster.local. (80)
18:31:41.314854 IP (tos 0x0, ttl 64, id 11057, offset 0, flags [none], proto UDP (17), length 129)
    x.y.6.8.53224 > x.y.6.13.8472: [bad udp cksum 0xd806 -> 0x25c1!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 6190, offset 0, flags [DF], proto UDP (17), length 79)
    10.42.4.29.46163 > 10.42.2.95.53: [bad udp cksum 0x1b1c -> 0x68d6!] 6262+ AAAA? harbor-database.svc.cluster.local. (51)
18:31:41.314862 IP (tos 0x0, ttl 64, id 11058, offset 0, flags [none], proto UDP (17), length 129)
    x.y.6.8.58594 > x.y.6.13.8472: [bad udp cksum 0xc30c -> 0x77c0!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 6191, offset 0, flags [DF], proto UDP (17), length 79)
    10.42.4.29.36991 > 10.42.2.95.53: [bad udp cksum 0x1b1c -> 0xcfcf!] 61520+ A? harbor-database.svc.cluster.local. (51)
18:31:41.372708 IP (tos 0x0, ttl 64, id 24970, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.44357 > x.y.6.13.8472: [bad udp cksum 0xd2c0 -> 0x3236!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 14662, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.52026 > 10.42.5.25.9100: Flags [S], cksum 0x1dc7 (incorrect -> 0x7d3c), seq 3281077109, win 28200, options [mss 1410,sackOK,TS val 1189413506 ecr 0,nop,wscale 7], length 0
18:31:41.373701 IP (tos 0x0, ttl 64, id 46820, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.39674 > x.y.6.8.8472: [bad udp cksum 0x16a3 -> 0x39aa!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 58500, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.38826 > 10.42.3.26.15020: Flags [S], cksum 0x1bc8 (incorrect -> 0x3ecf), seq 1052086688, win 28200, options [mss 1410,sackOK,TS val 4134725249 ecr 0,nop,wscale 7], length 0
18:31:42.013759 IP (tos 0x0, ttl 64, id 11407, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.42207 > x.y.6.13.8472: [bad udp cksum 0x0327 -> 0x0465!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 56724, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.60048 > 10.42.2.95.9153: Flags [S], cksum 0x1b0d (incorrect -> 0x1c4b), seq 3045481614, win 28200, options [mss 1410,sackOK,TS val 1410608741 ecr 0,nop,wscale 7], length 0
18:31:42.066341 IP (tos 0x0, ttl 64, id 11435, offset 0, flags [none], proto UDP (17), length 155)
    x.y.6.8.46598 > x.y.6.13.8472: [bad udp cksum 0xf1b1 -> 0x521c!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 64, id 24264, offset 0, flags [none], proto UDP (17), length 105)
    10.42.4.0.40919 > 10.42.2.95.53: [bad udp cksum 0x1b19 -> 0x7b83!] 26647+ [1au] A? kubernetes.default.svc.cluster.local. ar: . OPT UDPsize=4096 (77)
18:31:42.398695 IP (tos 0x0, ttl 64, id 25689, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.42976 > x.y.6.13.8472: [bad udp cksum 0xd825 -> 0x3399!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 14663, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.52026 > 10.42.5.25.9100: Flags [S], cksum 0x1dc7 (incorrect -> 0x793a), seq 3281077109, win 28200, options [mss 1410,sackOK,TS val 1189414532 ecr 0,nop,wscale 7], length 0
18:31:42.728110 IP (tos 0x0, ttl 64, id 11438, offset 0, flags [none], proto UDP (17), length 125)
    x.y.6.8.45036 > x.y.6.13.8472: [bad udp cksum 0xf810 -> 0xae85!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 36230, offset 0, flags [DF], proto UDP (17), length 75)
    10.42.4.39.45694 > 10.42.2.95.53: [bad udp cksum 0x1b22 -> 0xd196!] 40806+ A? elasticsearch-master-headless. (47)
18:31:42.728150 IP (tos 0x0, ttl 64, id 11439, offset 0, flags [none], proto UDP (17), length 125)
    x.y.6.8.45036 > x.y.6.13.8472: [bad udp cksum 0xf810 -> 0xb881!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 36231, offset 0, flags [DF], proto UDP (17), length 75)
    10.42.4.39.45694 > 10.42.2.95.53: [bad udp cksum 0x1b22 -> 0xdb92!] 31338+ AAAA? elasticsearch-master-headless. (47)
18:31:42.905819 IP (tos 0x0, ttl 64, id 11567, offset 0, flags [none], proto UDP (17), length 132)
    x.y.6.8.52533 > x.y.6.13.8472: [bad udp cksum 0xdac3 -> 0xc83b!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 43714, offset 0, flags [DF], proto UDP (17), length 82)
    10.42.4.42.55049 > 10.42.2.95.53: [bad udp cksum 0x1b2c -> 0x08a4!] 22039+ A? argocd-catalog.olm.svc.cluster.local. (54)
18:31:42.905830 IP (tos 0x0, ttl 64, id 11568, offset 0, flags [none], proto UDP (17), length 132)
    x.y.6.8.60214 > x.y.6.13.8472: [bad udp cksum 0xbcc2 -> 0x1b0b!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 43715, offset 0, flags [DF], proto UDP (17), length 82)
    10.42.4.42.46850 > 10.42.2.95.53: [bad udp cksum 0x1b2c -> 0x7974!] 1331+ AAAA? argocd-catalog.olm.svc.cluster.local. (54)
18:31:43.421734 IP (tos 0x0, ttl 64, id 47974, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.60159 > x.y.6.8.8472: [bad udp cksum 0xc69d -> 0xe1a4!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 58501, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.38826 > 10.42.3.26.15020: Flags [S], cksum 0x1bc8 (incorrect -> 0x36cf), seq 1052086688, win 28200, options [mss 1410,sackOK,TS val 4134727297 ecr 0,nop,wscale 7], length 0
18:31:43.997495 IP (tos 0x0, ttl 64, id 12275, offset 0, flags [none], proto UDP (17), length 136)
    x.y.6.8.54904 > x.y.6.13.8472: [bad udp cksum 0xd17e -> 0x75ac!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 24193, offset 0, flags [DF], proto UDP (17), length 86)
    10.42.4.44.54096 > 10.42.2.95.53: [bad udp cksum 0x1b32 -> 0xbf5f!] 47430+ A? kubernetes.default.svc.svc.cluster.local. (58)
18:31:43.997495 IP (tos 0x0, ttl 64, id 11569, offset 0, flags [none], proto UDP (17), length 136)
    x.y.6.8.32904 > x.y.6.13.8472: [bad udp cksum 0x276f -> 0x553d!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 24192, offset 0, flags [DF], proto UDP (17), length 86)
    10.42.4.44.33434 > 10.42.2.95.53: [bad udp cksum 0x1b32 -> 0x4900!] 32833+ AAAA? kubernetes.default.svc.svc.cluster.local. (58)
18:31:44.044740 IP (tos 0x0, ttl 64, id 12276, offset 0, flags [none], proto UDP (17), length 146)
    x.y.6.8.34213 > x.y.6.13.8472: [bad udp cksum 0x2237 -> 0x3282!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 38393, offset 0, flags [DF], proto UDP (17), length 96)
    10.42.4.27.43983 > 10.42.2.95.53: [bad udp cksum 0x1b2b -> 0x2b76!] 43071+ A? longhorn-backend.longhorn-system.svc.cluster.local. (68)
18:31:44.044776 IP (tos 0x0, ttl 64, id 12277, offset 0, flags [none], proto UDP (17), length 146)
    x.y.6.8.34213 > x.y.6.13.8472: [bad udp cksum 0x2237 -> 0xfce1!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 38394, offset 0, flags [DF], proto UDP (17), length 96)
    10.42.4.27.43983 > 10.42.2.95.53: [bad udp cksum 0x1b2b -> 0xf5d5!] 56772+ AAAA? longhorn-backend.longhorn-system.svc.cluster.local. (68)
18:31:44.446708 IP (tos 0x0, ttl 64, id 26555, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.41076 > x.y.6.13.8472: [bad udp cksum 0xdf91 -> 0x3305!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 14664, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.52026 > 10.42.5.25.9100: Flags [S], cksum 0x1dc7 (incorrect -> 0x713a), seq 3281077109, win 28200, options [mss 1410,sackOK,TS val 1189416580 ecr 0,nop,wscale 7], length 0
18:31:45.201165 IP (tos 0x0, ttl 64, id 49286, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.54925 > x.y.6.8.8472: [bad udp cksum 0xdb11 -> 0x8d76!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 10289, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.44992 > 10.42.3.28.7472: Flags [S], cksum 0x1bca (incorrect -> 0xce2e), seq 3488503207, win 28200, options [mss 1410,sackOK,TS val 2968603847 ecr 0,nop,wscale 7], length 0
18:31:45.575594 IP (tos 0x0, ttl 64, id 12641, offset 0, flags [none], proto UDP (17), length 158)
    x.y.6.8.42176 > x.y.6.13.8472: [bad udp cksum 0x031b -> 0x3a68!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 29419, offset 0, flags [DF], proto UDP (17), length 108)
    10.42.4.38.58625 > 10.42.2.95.53: [bad udp cksum 0x1b42 -> 0x528f!] 10381+ A? argocd-redis.argocd.svc.cluster.local.argocd.svc.cluster.local. (80)
18:31:45.575594 IP (tos 0x0, ttl 64, id 12640, offset 0, flags [none], proto UDP (17), length 158)
    x.y.6.8.45173 > x.y.6.13.8472: [bad udp cksum 0xf765 -> 0x045e!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 29420, offset 0, flags [DF], proto UDP (17), length 108)
    10.42.4.38.34035 > 10.42.2.95.53: [bad udp cksum 0x1b42 -> 0x283a!] 45781+ AAAA? argocd-redis.argocd.svc.cluster.local.argocd.svc.cluster.local. (80)
18:31:46.238703 IP (tos 0x0, ttl 64, id 50167, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.43592 > x.y.6.8.8472: [bad udp cksum 0x0757 -> 0xb5ad!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 10290, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.44992 > 10.42.3.28.7472: Flags [S], cksum 0x1bca (incorrect -> 0xca20), seq 3488503207, win 28200, options [mss 1410,sackOK,TS val 2968604885 ecr 0,nop,wscale 7], length 0
18:31:46.318907 IP (tos 0x0, ttl 64, id 13104, offset 0, flags [none], proto UDP (17), length 125)
    x.y.6.8.45800 > x.y.6.13.8472: [bad udp cksum 0xf50a -> 0x9003!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 11126, offset 0, flags [DF], proto UDP (17), length 75)
    10.42.4.29.34412 > 10.42.2.95.53: [bad udp cksum 0x1b18 -> 0xb610!] 36353+ A? harbor-database.cluster.local. (47)
18:31:46.318908 IP (tos 0x0, ttl 64, id 13103, offset 0, flags [none], proto UDP (17), length 125)
    x.y.6.8.52516 > x.y.6.13.8472: [bad udp cksum 0xdace -> 0xc8bc!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 11125, offset 0, flags [DF], proto UDP (17), length 75)
    10.42.4.29.40094 > 10.42.2.95.53: [bad udp cksum 0x1b18 -> 0x0906!] 2522+ AAAA? harbor-database.cluster.local. (47)
18:31:46.558477 IP (tos 0x0, ttl 64, id 13298, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.40017 > x.y.6.13.8472: [bad udp cksum 0x0bc3 -> 0xc0c4!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 20281, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.20.51392 > 10.42.2.133.10045: Flags [S], cksum 0x1b1b (incorrect -> 0xd01c), seq 570360647, win 28200, options [mss 1410,sackOK,TS val 3752327700 ecr 0,nop,wscale 7], length 0
18:31:46.576182 IP (tos 0x0, ttl 64, id 13315, offset 0, flags [none], proto UDP (17), length 158)
    x.y.6.8.52506 > x.y.6.13.8472: [bad udp cksum 0xdac0 -> 0x102c!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 29704, offset 0, flags [DF], proto UDP (17), length 108)
    10.42.4.38.44760 > 10.42.2.95.53: [bad udp cksum 0x1b42 -> 0x50ad!] 24701+ AAAA? argocd-redis.argocd.svc.cluster.local.argocd.svc.cluster.local. (80)
18:31:46.576190 IP (tos 0x0, ttl 64, id 13316, offset 0, flags [none], proto UDP (17), length 158)
    x.y.6.8.35484 > x.y.6.13.8472: [bad udp cksum 0x1d3f -> 0xb706!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 29705, offset 0, flags [DF], proto UDP (17), length 108)
    10.42.4.38.52774 > 10.42.2.95.53: [bad udp cksum 0x1b42 -> 0xb509!] 56557+ A? argocd-redis.argocd.svc.cluster.local.argocd.svc.cluster.local. (80)
18:31:47.066460 IP (tos 0x0, ttl 64, id 13591, offset 0, flags [none], proto UDP (17), length 155)
    x.y.6.8.46598 > x.y.6.13.8472: [bad udp cksum 0xf1b1 -> 0x521c!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 64, id 28248, offset 0, flags [none], proto UDP (17), length 105)
    10.42.4.0.40919 > 10.42.2.95.53: [bad udp cksum 0x1b19 -> 0x7b83!] 26647+ [1au] A? kubernetes.default.svc.cluster.local. ar: . OPT UDPsize=4096 (77)
18:31:47.453740 IP (tos 0x0, ttl 64, id 50668, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.55618 > x.y.6.8.8472: [bad udp cksum 0xd85a -> 0xe3a1!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 58502, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.38826 > 10.42.3.26.15020: Flags [S], cksum 0x1bc8 (incorrect -> 0x270f), seq 1052086688, win 28200, options [mss 1410,sackOK,TS val 4134731329 ecr 0,nop,wscale 7], length 0
18:31:47.582690 IP (tos 0x0, ttl 64, id 13878, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.44860 > x.y.6.13.8472: [bad udp cksum 0xf8d7 -> 0x7781!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 42842, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.20.39994 > 10.42.2.133.10075: Flags [S], cksum 0x1b1b (incorrect -> 0x99c4), seq 3258499532, win 28200, options [mss 1410,sackOK,TS val 3752328725 ecr 0,nop,wscale 7], length 0
18:31:47.730173 IP (tos 0x0, ttl 64, id 13956, offset 0, flags [none], proto UDP (17), length 125)
    x.y.6.8.45036 > x.y.6.13.8472: [bad udp cksum 0xf810 -> 0xae85!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 39445, offset 0, flags [DF], proto UDP (17), length 75)
    10.42.4.39.45694 > 10.42.2.95.53: [bad udp cksum 0x1b22 -> 0xd196!] 40806+ A? elasticsearch-master-headless. (47)
18:31:47.730215 IP (tos 0x0, ttl 64, id 13957, offset 0, flags [none], proto UDP (17), length 125)
    x.y.6.8.45036 > x.y.6.13.8472: [bad udp cksum 0xf810 -> 0xb881!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 39446, offset 0, flags [DF], proto UDP (17), length 75)
    10.42.4.39.45694 > 10.42.2.95.53: [bad udp cksum 0x1b22 -> 0xdb92!] 31338+ AAAA? elasticsearch-master-headless. (47)
18:31:47.906525 IP (tos 0x0, ttl 64, id 14089, offset 0, flags [none], proto UDP (17), length 118)
    x.y.6.8.40201 > x.y.6.13.8472: [bad udp cksum 0x0afe -> 0x5a07!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 45836, offset 0, flags [DF], proto UDP (17), length 68)
    10.42.4.42.34081 > 10.42.2.95.53: [bad udp cksum 0x1b1e -> 0x6a27!] 16800+ AAAA? argocd-catalog.olm.svc. (40)
18:31:47.906548 IP (tos 0x0, ttl 64, id 14090, offset 0, flags [none], proto UDP (17), length 118)
    x.y.6.8.56042 > x.y.6.13.8472: [bad udp cksum 0xcd1c -> 0x9315!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 45837, offset 0, flags [DF], proto UDP (17), length 68)
    10.42.4.42.38415 > 10.42.2.95.53: [bad udp cksum 0x1b1e -> 0xe116!] 47581+ A? argocd-catalog.olm.svc. (40)
18:31:48.285688 IP (tos 0x0, ttl 64, id 51383, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.45082 > x.y.6.8.8472: [bad udp cksum 0x0185 -> 0xa7dc!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 10291, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.44992 > 10.42.3.28.7472: Flags [S], cksum 0x1bca (incorrect -> 0xc221), seq 3488503207, win 28200, options [mss 1410,sackOK,TS val 2968606932 ecr 0,nop,wscale 7], length 0
18:31:48.307677 IP (tos 0x0, ttl 64, id 54341, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.51452 > x.y.6.15.8472: [bad udp cksum 0x8260 -> 0xda8c!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 38897, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.50694 > 10.42.6.244.8080: Flags [S], cksum 0x1fa2 (incorrect -> 0x77ce), seq 1323056093, win 28200, options [mss 1410,sackOK,TS val 3976127864 ecr 0,nop,wscale 7], length 0
18:31:48.477718 IP (tos 0x0, ttl 64, id 26907, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.36215 > x.y.6.13.8472: [bad udp cksum 0xf28e -> 0x3643!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 14665, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.52026 > 10.42.5.25.9100: Flags [S], cksum 0x1dc7 (incorrect -> 0x617b), seq 3281077109, win 28200, options [mss 1410,sackOK,TS val 1189420611 ecr 0,nop,wscale 7], length 0
18:31:48.997806 IP (tos 0x0, ttl 64, id 14915, offset 0, flags [none], proto UDP (17), length 136)
    x.y.6.8.35592 > x.y.6.13.8472: [bad udp cksum 0x1cef -> 0x0918!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 24308, offset 0, flags [DF], proto UDP (17), length 86)
    10.42.4.44.47169 > 10.42.2.95.53: [bad udp cksum 0x1b32 -> 0x075b!] 35930+ A? kubernetes.default.svc.svc.cluster.local. (58)
18:31:48.997806 IP (tos 0x0, ttl 64, id 14916, offset 0, flags [none], proto UDP (17), length 136)
    x.y.6.8.33297 > x.y.6.13.8472: [bad udp cksum 0x25e6 -> 0x9d94!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 24307, offset 0, flags [DF], proto UDP (17), length 86)
    10.42.4.44.53112 > 10.42.2.95.53: [bad udp cksum 0x1b32 -> 0x92e0!] 59778+ AAAA? kubernetes.default.svc.svc.cluster.local. (58)
18:31:49.046339 IP (tos 0x0, ttl 64, id 14946, offset 0, flags [none], proto UDP (17), length 112)
    x.y.6.8.36938 > x.y.6.13.8472: [bad udp cksum 0x17b4 -> 0xee4f!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 42574, offset 0, flags [DF], proto UDP (17), length 62)
    10.42.4.27.60779 > 10.42.2.95.53: [bad udp cksum 0x1b09 -> 0xf1a4!] 45157+ A? longhorn-backend. (34)
18:31:49.046374 IP (tos 0x0, ttl 64, id 14947, offset 0, flags [none], proto UDP (17), length 112)
    x.y.6.8.36938 > x.y.6.13.8472: [bad udp cksum 0x17b4 -> 0x6b81!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 42575, offset 0, flags [DF], proto UDP (17), length 62)
    10.42.4.27.60779 > 10.42.2.95.53: [bad udp cksum 0x1b09 -> 0x6ed6!] 13081+ AAAA? longhorn-backend. (34)
18:31:49.309708 IP (tos 0x0, ttl 64, id 54362, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.58694 > x.y.6.15.8472: [bad udp cksum 0x6616 -> 0xba57!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 38898, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.50694 > 10.42.6.244.8080: Flags [S], cksum 0x1fa2 (incorrect -> 0x73e3), seq 1323056093, win 28200, options [mss 1410,sackOK,TS val 3976128867 ecr 0,nop,wscale 7], length 0
18:31:49.887474 IP (tos 0x0, ttl 64, id 15138, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.44820 > x.y.6.13.8472: [bad udp cksum 0xf8f1 -> 0x14e2!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 45115, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.60544 > 10.42.2.95.9153: Flags [S], cksum 0x1b0d (incorrect -> 0x36fd), seq 1841326321, win 28200, options [mss 1410,sackOK,TS val 1410616614 ecr 0,nop,wscale 7], length 0
18:31:50.909735 IP (tos 0x0, ttl 64, id 15293, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.37675 > x.y.6.13.8472: [bad udp cksum 0x14db -> 0x2ccc!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 45116, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.60544 > 10.42.2.95.9153: Flags [S], cksum 0x1b0d (incorrect -> 0x32fe), seq 1841326321, win 28200, options [mss 1410,sackOK,TS val 1410617637 ecr 0,nop,wscale 7], length 0
18:31:51.322808 IP (tos 0x0, ttl 64, id 15518, offset 0, flags [none], proto UDP (17), length 125)
    x.y.6.8.34819 > x.y.6.13.8472: [bad udp cksum 0x1ff0 -> 0xeefe!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 13263, offset 0, flags [DF], proto UDP (17), length 75)
    10.42.4.29.47266 > 10.42.2.95.53: [bad udp cksum 0x1b18 -> 0xea26!] 3253+ AAAA? harbor-database.cluster.local. (47)
18:31:51.322815 IP (tos 0x0, ttl 64, id 15519, offset 0, flags [none], proto UDP (17), length 125)
    x.y.6.8.40671 > x.y.6.13.8472: [bad udp cksum 0x0914 -> 0x1cda!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 13264, offset 0, flags [DF], proto UDP (17), length 75)
    10.42.4.29.38559 > 10.42.2.95.53: [bad udp cksum 0x1b18 -> 0x2ede!] 1281+ A? harbor-database.cluster.local. (47)
18:31:51.357739 IP (tos 0x0, ttl 64, id 54381, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.48264 > x.y.6.15.8472: [bad udp cksum 0x8ed4 -> 0xdb15!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 38899, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.50694 > 10.42.6.244.8080: Flags [S], cksum 0x1fa2 (incorrect -> 0x6be3), seq 1323056093, win 28200, options [mss 1410,sackOK,TS val 3976130915 ecr 0,nop,wscale 7], length 0
18:31:52.317741 IP (tos 0x0, ttl 64, id 54904, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.41754 > x.y.6.8.8472: [bad udp cksum 0x0e85 -> 0xa51c!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 10292, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.44992 > 10.42.3.28.7472: Flags [S], cksum 0x1bca (incorrect -> 0xb261), seq 3488503207, win 28200, options [mss 1410,sackOK,TS val 2968610964 ecr 0,nop,wscale 7], length 0
18:31:52.491625 IP (tos 0x0, ttl 64, id 16375, offset 0, flags [none], proto UDP (17), length 147)
    x.y.6.8.57822 > x.y.6.13.8472: [bad udp cksum 0xc60b -> 0xb619!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 48054, offset 0, flags [DF], proto UDP (17), length 97)
    10.42.4.42.60270 > 10.42.2.95.53: [bad udp cksum 0x1b3b -> 0x0b49!] 54941+ A? operatorhubio-catalog.olm.svc.olm.svc.cluster.local. (69)
18:31:52.491636 IP (tos 0x0, ttl 64, id 16374, offset 0, flags [none], proto UDP (17), length 147)
    x.y.6.8.52381 > x.y.6.13.8472: [bad udp cksum 0xdb4c -> 0x9eca!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 48053, offset 0, flags [DF], proto UDP (17), length 97)
    10.42.4.42.45898 > 10.42.2.95.53: [bad udp cksum 0x1b3b -> 0xdeb8!] 8274+ AAAA? operatorhubio-catalog.olm.svc.olm.svc.cluster.local. (69)
18:31:52.577764 IP (tos 0x0, ttl 64, id 16376, offset 0, flags [none], proto UDP (17), length 158)
    x.y.6.8.46621 > x.y.6.13.8472: [bad udp cksum 0xf1bd -> 0x7778!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 34959, offset 0, flags [DF], proto UDP (17), length 108)
    10.42.4.38.56923 > 10.42.2.95.53: [bad udp cksum 0x1b42 -> 0xa0fc!] 57541+ A? argocd-redis.argocd.svc.cluster.local.argocd.svc.cluster.local. (80)
18:31:52.577765 IP (tos 0x0, ttl 64, id 16439, offset 0, flags [none], proto UDP (17), length 158)
    x.y.6.8.60735 > x.y.6.13.8472: [bad udp cksum 0xba9b -> 0x9199!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 34960, offset 0, flags [DF], proto UDP (17), length 108)
    10.42.4.38.59939 > 10.42.2.95.53: [bad udp cksum 0x1b42 -> 0xf23f!] 33695+ AAAA? argocd-redis.argocd.svc.cluster.local.argocd.svc.cluster.local. (80)
18:31:52.907573 IP (tos 0x0, ttl 64, id 16576, offset 0, flags [none], proto UDP (17), length 118)
    x.y.6.8.47077 > x.y.6.13.8472: [bad udp cksum 0xf021 -> 0xad32!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 48414, offset 0, flags [DF], proto UDP (17), length 68)
    10.42.4.42.52657 > 10.42.2.95.53: [bad udp cksum 0x1b1e -> 0xd82e!] 35592+ AAAA? argocd-catalog.olm.svc. (40)
18:31:52.907573 IP (tos 0x0, ttl 64, id 16577, offset 0, flags [none], proto UDP (17), length 118)
    x.y.6.8.50312 > x.y.6.13.8472: [bad udp cksum 0xe37e -> 0xf67b!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 48415, offset 0, flags [DF], proto UDP (17), length 68)
    10.42.4.42.48006 > 10.42.2.95.53: [bad udp cksum 0x1b1e -> 0x2e1b!] 18274+ A? argocd-catalog.olm.svc. (40)
18:31:52.957740 IP (tos 0x0, ttl 64, id 16599, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.34198 > x.y.6.13.8472: [bad udp cksum 0x2270 -> 0x3261!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 45117, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.44.60544 > 10.42.2.95.9153: Flags [S], cksum 0x1b0d (incorrect -> 0x2afe), seq 1841326321, win 28200, options [mss 1410,sackOK,TS val 1410619685 ecr 0,nop,wscale 7], length 0
18:31:53.294921 IP (tos 0x0, ttl 64, id 16778, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.54824 > x.y.6.13.8472: [bad udp cksum 0xd1eb -> 0x99ba!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 2844, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.20.53108 > 10.42.2.133.10030: Flags [S], cksum 0x1b1b (incorrect -> 0xe2e9), seq 380164826, win 28200, options [mss 1410,sackOK,TS val 3752334437 ecr 0,nop,wscale 7], length 0
18:31:53.998862 IP (tos 0x0, ttl 64, id 16925, offset 0, flags [none], proto UDP (17), length 132)
    x.y.6.8.35945 > x.y.6.13.8472: [bad udp cksum 0x1b92 -> 0x0339!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 27408, offset 0, flags [DF], proto UDP (17), length 82)
    10.42.4.44.33058 > 10.42.2.95.53: [bad udp cksum 0x1b2e -> 0x02d5!] 40550+ AAAA? kubernetes.default.svc.cluster.local. (54)
18:31:53.998863 IP (tos 0x0, ttl 64, id 16926, offset 0, flags [none], proto UDP (17), length 132)
    x.y.6.8.43627 > x.y.6.13.8472: [bad udp cksum 0xfd8f -> 0xe8d4!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 27409, offset 0, flags [DF], proto UDP (17), length 82)
    10.42.4.44.42524 > 10.42.2.95.53: [bad udp cksum 0x1b2e -> 0x0673!] 30185+ A? kubernetes.default.svc.cluster.local. (54)
18:31:54.051497 IP (tos 0x0, ttl 64, id 16962, offset 0, flags [none], proto UDP (17), length 112)
    x.y.6.8.36938 > x.y.6.13.8472: [bad udp cksum 0x17b4 -> 0xee4f!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 43966, offset 0, flags [DF], proto UDP (17), length 62)
    10.42.4.27.60779 > 10.42.2.95.53: [bad udp cksum 0x1b09 -> 0xf1a4!] 45157+ A? longhorn-backend. (34)
18:31:54.051535 IP (tos 0x0, ttl 64, id 16963, offset 0, flags [none], proto UDP (17), length 112)
    x.y.6.8.36938 > x.y.6.13.8472: [bad udp cksum 0x17b4 -> 0x6b81!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 43967, offset 0, flags [DF], proto UDP (17), length 62)
    10.42.4.27.60779 > 10.42.2.95.53: [bad udp cksum 0x1b09 -> 0x6ed6!] 13081+ AAAA? longhorn-backend. (34)
18:31:54.301710 IP (tos 0x0, ttl 64, id 17102, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.8.49614 > x.y.6.13.8472: [bad udp cksum 0xe645 -> 0xaa25!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 2845, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.4.20.53108 > 10.42.2.133.10030: Flags [S], cksum 0x1b1b (incorrect -> 0xdefa), seq 380164826, win 28200, options [mss 1410,sackOK,TS val 3752335444 ecr 0,nop,wscale 7], length 0
^C
83 packets captured
84 packets received by filter
0 packets dropped by kernel

Node 02

[root@vldsocfg02-node ~]# sudo tcpdump -vvvnni eth0 port 8472
dropped privs to tcpdump
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
18:31:35.167858 IP (tos 0x0, ttl 64, id 16736, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.33227 > x.y.6.8.8472: [bad udp cksum 0x263d -> 0xaf58!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 146, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51384 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0xa42a), seq 3734209827, win 28200, options [mss 1410,sackOK,TS val 1274907993 ecr 0,nop,wscale 7], length 0
18:31:35.772205 IP (tos 0x0, ttl 64, id 18598, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.45452 > x.y.6.15.8472: [bad udp cksum 0xa585 -> 0x42aa!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 27417, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.96.44508 > 10.42.6.247.9300: Flags [S], cksum 0x1dd9 (incorrect -> 0xbafd), seq 3192708789, win 28200, options [mss 1410,sackOK,TS val 2575497387 ecr 0,nop,wscale 7], length 0
18:31:36.284190 IP (tos 0x0, ttl 64, id 18653, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.40090 > x.y.6.15.8472: [bad udp cksum 0xba79 -> 0x5a46!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 44897, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.58660 > 10.42.6.244.8080: Flags [S], cksum 0x1ddb (incorrect -> 0xbda7), seq 773670146, win 28200, options [mss 1410,sackOK,TS val 2772427169 ecr 0,nop,wscale 7], length 0
18:31:36.732201 IP (tos 0x0, ttl 64, id 17939, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.43382 > x.y.6.8.8472: [bad udp cksum 0xfe8b -> 0x2464!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 41220, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.96.55762 > 10.42.4.39.9300: Flags [S], cksum 0x1b09 (incorrect -> 0x40e1), seq 2146426964, win 28200, options [mss 1410,sackOK,TS val 3229900638 ecr 0,nop,wscale 7], length 0
18:31:37.244239 IP (tos 0x0, ttl 64, id 19177, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.47391 > x.y.6.15.8472: [bad udp cksum 0x9df0 -> 0x230c!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 28883, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.42534 > 10.42.6.240.9093: Flags [S], cksum 0x1dd7 (incorrect -> 0xa2f2), seq 122608995, win 28200, options [mss 1410,sackOK,TS val 2022568579 ecr 0,nop,wscale 7], length 0
18:31:38.169957 IP (tos 0x0, ttl 64, id 18205, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.58691 > x.y.6.8.8472: [bad udp cksum 0xc2c4 -> 0x17cc!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 11867, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51488 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0x7016), seq 3874792628, win 28200, options [mss 1410,sackOK,TS val 1274910995 ecr 0,nop,wscale 7], length 0
18:31:39.228203 IP (tos 0x0, ttl 64, id 19108, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.60293 > x.y.6.8.8472: [bad udp cksum 0xbc82 -> 0x0d67!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 11868, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51488 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0x6bf3), seq 3874792628, win 28200, options [mss 1410,sackOK,TS val 1274912054 ecr 0,nop,wscale 7], length 0
18:31:40.111575 IP (tos 0x0, ttl 64, id 19748, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.54923 > x.y.6.15.8472: [bad udp cksum 0x8084 -> 0x5175!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 45405, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.42804 > 10.42.6.240.9093: Flags [S], cksum 0x1dd7 (incorrect -> 0xeec7), seq 629499670, win 28200, options [mss 1410,sackOK,TS val 2022571446 ecr 0,nop,wscale 7], length 0
18:31:40.316194 IP (tos 0x0, ttl 64, id 19870, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.33114 > x.y.6.15.8472: [bad udp cksum 0xd5b9 -> 0x65c6!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 44898, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.58660 > 10.42.6.244.8080: Flags [S], cksum 0x1ddb (incorrect -> 0xade7), seq 773670146, win 28200, options [mss 1410,sackOK,TS val 2772431201 ecr 0,nop,wscale 7], length 0
18:31:41.148196 IP (tos 0x0, ttl 64, id 20492, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.37690 > x.y.6.15.8472: [bad udp cksum 0xc3d5 -> 0x90b9!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 45406, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.42804 > 10.42.6.240.9093: Flags [S], cksum 0x1dd7 (incorrect -> 0xeaba), seq 629499670, win 28200, options [mss 1410,sackOK,TS val 2022572483 ecr 0,nop,wscale 7], length 0
18:31:42.171457 IP (tos 0x0, ttl 64, id 21787, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.33634 > x.y.6.8.8472: [bad udp cksum 0x24a6 -> 0x40ab!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 36955, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51606 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0x3714), seq 1042577007, win 28200, options [mss 1410,sackOK,TS val 1274914997 ecr 0,nop,wscale 7], length 0
18:31:42.556224 IP (tos 0x0, ttl 64, id 21957, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.50367 > x.y.6.8.8472: [bad udp cksum 0xe34c -> 0x7374!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 63158, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.99.33708 > 10.42.4.46.10060: Flags [S], cksum 0x1b13 (incorrect -> 0xab3a), seq 1488411840, win 28200, options [mss 1410,sackOK,TS val 1105752466 ecr 0,nop,wscale 7], length 0
18:31:43.196203 IP (tos 0x0, ttl 64, id 21321, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.59075 > x.y.6.15.8472: [bad udp cksum 0x704c -> 0x3530!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 45407, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.42804 > 10.42.6.240.9093: Flags [S], cksum 0x1dd7 (incorrect -> 0xe2ba), seq 629499670, win 28200, options [mss 1410,sackOK,TS val 2022574531 ecr 0,nop,wscale 7], length 0
18:31:43.196204 IP (tos 0x0, ttl 64, id 22386, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.53089 > x.y.6.8.8472: [bad udp cksum 0xd8a6 -> 0xf0aa!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 36956, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51606 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0x3313), seq 1042577007, win 28200, options [mss 1410,sackOK,TS val 1274916022 ecr 0,nop,wscale 7], length 0
18:31:43.542945 IP (tos 0x0, ttl 64, id 21434, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.45187 > x.y.6.15.8472: [bad udp cksum 0xa689 -> 0x3c4b!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 44385, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.37314 > 10.42.6.237.3000: Flags [S], cksum 0x1dd4 (incorrect -> 0xb395), seq 1000064030, win 28200, options [mss 1410,sackOK,TS val 2456672556 ecr 0,nop,wscale 7], length 0
18:31:44.092198 IP (tos 0x0, ttl 64, id 21730, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.42198 > x.y.6.15.8472: [bad udp cksum 0xb23b -> 0x2ee0!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 27418, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.96.44508 > 10.42.6.247.9300: Flags [S], cksum 0x1dd9 (incorrect -> 0x9a7d), seq 3192708789, win 28200, options [mss 1410,sackOK,TS val 2575505707 ecr 0,nop,wscale 7], length 0
18:31:44.604219 IP (tos 0x0, ttl 64, id 22089, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.52404 > x.y.6.15.8472: [bad udp cksum 0x8a58 -> 0x1bf4!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 44386, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.37314 > 10.42.6.237.3000: Flags [S], cksum 0x1dd4 (incorrect -> 0xaf6f), seq 1000064030, win 28200, options [mss 1410,sackOK,TS val 2456673618 ecr 0,nop,wscale 7], length 0
18:31:45.116204 IP (tos 0x0, ttl 64, id 23637, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.33728 > x.y.6.8.8472: [bad udp cksum 0x2442 -> 0x295a!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 41221, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.96.55762 > 10.42.4.39.9300: Flags [S], cksum 0x1b09 (incorrect -> 0x2021), seq 2146426964, win 28200, options [mss 1410,sackOK,TS val 3229909022 ecr 0,nop,wscale 7], length 0
18:31:45.244245 IP (tos 0x0, ttl 64, id 23730, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.39778 > x.y.6.8.8472: [bad udp cksum 0x0ca6 -> 0x1caa!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 36957, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51606 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0x2b13), seq 1042577007, win 28200, options [mss 1410,sackOK,TS val 1274918070 ecr 0,nop,wscale 7], length 0
18:31:46.653199 IP (tos 0x0, ttl 64, id 23862, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.37717 > x.y.6.15.8472: [bad udp cksum 0xc3b7 -> 0x4d52!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 44387, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.37314 > 10.42.6.237.3000: Flags [S], cksum 0x1dd4 (incorrect -> 0xa76e), seq 1000064030, win 28200, options [mss 1410,sackOK,TS val 2456675667 ecr 0,nop,wscale 7], length 0
18:31:47.228226 IP (tos 0x0, ttl 64, id 24106, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.49451 > x.y.6.15.8472: [bad udp cksum 0x95e4 -> 0x4b08!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 45408, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.42804 > 10.42.6.240.9093: Flags [S], cksum 0x1dd7 (incorrect -> 0xd2fa), seq 629499670, win 28200, options [mss 1410,sackOK,TS val 2022578563 ecr 0,nop,wscale 7], length 0
18:31:48.173224 IP (tos 0x0, ttl 64, id 25704, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.49666 > x.y.6.8.8472: [bad udp cksum 0xe605 -> 0xded8!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 49983, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51752 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0x13e2), seq 1446668679, win 28200, options [mss 1410,sackOK,TS val 1274920999 ecr 0,nop,wscale 7], length 0
18:31:49.212236 IP (tos 0x0, ttl 64, id 26009, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.55010 > x.y.6.8.8472: [bad udp cksum 0xd125 -> 0xc5e9!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 49984, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51752 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0x0fd3), seq 1446668679, win 28200, options [mss 1410,sackOK,TS val 1274922038 ecr 0,nop,wscale 7], length 0
18:31:50.112411 IP (tos 0x0, ttl 64, id 24151, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.40536 > x.y.6.15.8472: [bad udp cksum 0xb8b7 -> 0xb68f!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 57247, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.43072 > 10.42.6.240.9093: Flags [S], cksum 0x1dd7 (incorrect -> 0x1baf), seq 2391451916, win 28200, options [mss 1410,sackOK,TS val 2022581447 ecr 0,nop,wscale 7], length 0
18:31:50.561148 IP (tos 0x0, ttl 64, id 26772, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.58641 > x.y.6.8.8472: [bad udp cksum 0xc2ec -> 0x3381!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 18952, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.114.39370 > 10.42.4.17.50051: Flags [S], cksum 0x1b05 (incorrect -> 0x8b99), seq 3632745089, win 28200, options [mss 1410,sackOK,TS val 2804747285 ecr 0,nop,wscale 7], length 0
18:31:50.684198 IP (tos 0x0, ttl 64, id 24332, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.52726 > x.y.6.15.8472: [bad udp cksum 0x8916 -> 0x02f2!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 44388, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.37314 > 10.42.6.237.3000: Flags [S], cksum 0x1dd4 (incorrect -> 0x97af), seq 1000064030, win 28200, options [mss 1410,sackOK,TS val 2456679698 ecr 0,nop,wscale 7], length 0
18:31:51.133219 IP (tos 0x0, ttl 64, id 24426, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.59898 > x.y.6.15.8472: [bad udp cksum 0x6d15 -> 0x66f0!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 57248, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.43072 > 10.42.6.240.9093: Flags [S], cksum 0x1dd7 (incorrect -> 0x17b2), seq 2391451916, win 28200, options [mss 1410,sackOK,TS val 2022582468 ecr 0,nop,wscale 7], length 0
18:31:51.260197 IP (tos 0x0, ttl 64, id 27102, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.42711 > x.y.6.8.8472: [bad udp cksum 0x0131 -> 0xedf4!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 49985, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51752 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0x07d3), seq 1446668679, win 28200, options [mss 1410,sackOK,TS val 1274924086 ecr 0,nop,wscale 7], length 0
18:31:51.581195 IP (tos 0x0, ttl 64, id 27402, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.34593 > x.y.6.8.8472: [bad udp cksum 0x20dd -> 0x8d74!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 18953, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.114.39370 > 10.42.4.17.50051: Flags [S], cksum 0x1b05 (incorrect -> 0x879c), seq 3632745089, win 28200, options [mss 1410,sackOK,TS val 2804748306 ecr 0,nop,wscale 7], length 0
18:31:53.180221 IP (tos 0x0, ttl 64, id 24670, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.52459 > x.y.6.15.8472: [bad udp cksum 0x8a24 -> 0x7c00!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 57249, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.43072 > 10.42.6.240.9093: Flags [S], cksum 0x1dd7 (incorrect -> 0x0fb3), seq 2391451916, win 28200, options [mss 1410,sackOK,TS val 2022584515 ecr 0,nop,wscale 7], length 0
18:31:53.628199 IP (tos 0x0, ttl 64, id 28456, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.59202 > x.y.6.8.8472: [bad udp cksum 0xc0bb -> 0x2554!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 18954, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.114.39370 > 10.42.4.17.50051: Flags [S], cksum 0x1b05 (incorrect -> 0x7f9d), seq 3632745089, win 28200, options [mss 1410,sackOK,TS val 2804750353 ecr 0,nop,wscale 7], length 0
18:31:55.293229 IP (tos 0x0, ttl 64, id 29196, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.58288 > x.y.6.8.8472: [bad udp cksum 0xc457 -> 0xa15a!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 49986, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.108.51752 > 10.42.4.33.5432: Flags [S], cksum 0x1b0f (incorrect -> 0xf811), seq 1446668679, win 28200, options [mss 1410,sackOK,TS val 1274928119 ecr 0,nop,wscale 7], length 0
18:31:55.702148 IP (tos 0x0, ttl 64, id 25327, offset 0, flags [none], proto UDP (17), length 110)
    x.y.6.13.37437 > x.y.6.15.8472: [bad udp cksum 0xc4d2 -> 0x6c63!] OTV, flags [I] (0x08), overlay 0, instance 1
IP (tos 0x0, ttl 63, id 52150, offset 0, flags [DF], proto TCP (6), length 60)
    10.42.2.101.43222 > 10.42.6.240.9093: Flags [S], cksum 0x1dd7 (incorrect -> 0xc567), seq 312042714, win 28200, options [mss 1410,sackOK,TS val 2022587036 ecr 0,nop,wscale 7], length 0
^C
33 packets captured
34 packets received by filter
0 packets dropped by kernel
[root@vldsocfg02-node ~]# 

@manuelbuil
Contributor

Oh ==> bad udp cksum 0xf1b1 -> 0x521c!

You might be hitting a kernel bug that affects UDP + VXLAN when the kernel's checksum offloading feature is in use. We saw it in Ubuntu but thought it was fixed in RHEL ==> rancher/rke2#1541

Could you please try disabling the offloading on all nodes? Execute this command: sudo ethtool -K flannel.1 tx-checksum-ip-generic off and try again.
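
For reference, a minimal sketch of applying and verifying the change on one node (the setting does not survive a reboot, so it must be reapplied or made persistent):

# Disable TX checksum offload on the flannel VXLAN interface
sudo ethtool -K flannel.1 tx-checksum-ip-generic off

# Verify with lowercase -k, which lists the current offload settings
sudo ethtool -k flannel.1 | grep tx-checksum-ip-generic
# expected output: tx-checksum-ip-generic: off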

@chris93111
Author

worked !

[root@vldsocfg01-node ~]# dig @10.43.0.10 kubernetes.default.svc.cluster.local

; <<>> DiG 9.11.26-RedHat-9.11.26-4.el8_4 <<>> @10.43.0.10 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: f6c047e0da67c246 (echoed)
;; QUESTION SECTION:
;kubernetes.default.svc.cluster.local. IN A

;; ANSWER SECTION:
kubernetes.default.svc.cluster.local. 5	IN A	10.43.0.1

;; Query time: 0 msec
;; SERVER: 10.43.0.10#53(10.43.0.10)
;; WHEN: Mon Jan 31 19:27:18 CET 2022
;; MSG SIZE  rcvd: 129

[root@vldsocfg01-node ~]

@manuelbuil Thanks for helping me debug.
I had seen and tried this fix before, but I must have made a mistake in the command.

Thank you!

@manuelbuil
Contributor

Thanks for confirming, and for your quick response! This is something we need to fix in flannel upstream.

@manuelbuil
Contributor

Note that there are known issues with RHEL 8 and VMware. There is one related to vxlan which may be the root cause of our issue ==> https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202111001.html#esxi670-202111401-bg-resolved

@abstractalchemist

I think I'm having a very similar issue, but it's a single-host deployment of k3s on a RHEL 8.4 system. The node is deployed in AWS EC2, and I'm just trying to get a light cluster started up using k3s. However, none of the pods that I deploy to k3s can reach the apiserver (including coredns, it appears).

@manuelbuil
Contributor

I think I'm having a very similar issue, but it's a single-host deployment of k3s on a RHEL 8.4 system. The node is deployed in AWS EC2, and I'm just trying to get a light cluster started up using k3s. However, none of the pods that I deploy to k3s can reach the apiserver (including coredns, it appears).

Hi! Can you give us the output of:

kubectl get pods -A -o wide
brctl show
ip r

@abstractalchemist

Sorry, maybe I did something wrong. I created a new instance, disabled nm-cloud-setup.service and nm-cloud-setup.timer, restarted the instance, and installed k3s, and now it seems to work. I'm pretty sure I disabled both services the last time as well, and it still wasn't functioning.
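
For anyone hitting the same thing on EC2, a minimal sketch of that sequence (unit names as mentioned above; run before installing k3s):

# Disable and stop both nm-cloud-setup units, then reboot
sudo systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer
sudo reboot
# after the reboot, install k3s, e.g.: curl -sfL https://get.k3s.io | sh -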

@jgerry2002

Same issue still happening with RHEL 8.6. Pod communication entirely broken.

I added the above fix to crontab as a band-aid so it survives reboot:

@reboot ethtool -K flannel.1 tx-checksum-ip-generic off

This fix should be posted in the README to avoid headaches. It took a bit of digging to find this issue.
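
A sketch of installing that entry non-interactively into root's crontab (appends to any existing entries):

# Append the @reboot line to root's crontab, preserving existing entries
(sudo crontab -l 2>/dev/null; echo '@reboot ethtool -K flannel.1 tx-checksum-ip-generic off') | sudo crontab -
# Caveat: @reboot fires when cron starts, which can be before k3s has created flannel.1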

@brandond
Contributor

brandond commented Jul 6, 2022

Is EL8 still shipping a kernel with broken vxlan tx checksum offload?

@jgerry2002

Is EL8 still shipping a kernel with broken vxlan tx checksum offload?

Yes it is. Not unusual for Red Hat, since they typically move at the speed of molasses when making changes. I had three new VMs with a fresh install of 8.6 and they were still broken: the same bad udp checksum issue when I used tcpdump. The ethtool change fixed it.

@adamlamar

I can confirm the problem and the ethtool fix on RHEL 8.6. The following releases have known broken kernels on VMware vSphere with the vxlan tx checksum offload bug:

  • RHEL 8.3
  • RHEL 8.6

Also, I believe the problem manifests when the VMs are spread across VMware vSphere hosts, not when they're all on a single host.

The following is known good:

  • RHEL 8.4

I don't know about 8.5, but it was fixed in 8.4, so this is another regression.

@chris93111
Author

Hi, we did some tests again with RHEL 8.3 and we see a really strange problem with vSphere.

A cluster with RHEL 8.3 and VM hardware version 11 works; the bad checksum is present but has no impact on the cluster.

A cluster with RHEL 8.3 and VM hardware versions 15 to 19 does not work; there are problems with DNS resolution.

(tested with rke2 and k3s)

This problem is known in OpenShift and fixed upstream:

https://bugzilla.redhat.com/show_bug.cgi?id=1987108

@stale

stale bot commented Jan 31, 2023

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label Jan 31, 2023
@stale stale bot closed this as completed Feb 14, 2023
@rbojan

rbojan commented Dec 19, 2023

Same issue still happening with RHEL 8.6. Pod communication entirely broken.

I added the above fix to crontab as a band-aid so it survives reboot:

@reboot ethtool -K flannel.1 tx-checksum-ip-generic off

This fix should be posted in the README to avoid headaches. It took a bit of digging to find this issue.

We encountered an issue where the flannel.1 interface was not accessible immediately after a reboot. To resolve this, we developed a bash script and established a systemd service as a workaround.

  1. sudo vi /usr/local/bin/flannel-fix.sh
#!/usr/bin/env bash

# Maximum wait time in seconds (e.g., 300 seconds = 5 minutes)
MAX_WAIT=300
WAIT_INTERVAL=10
ELAPSED_TIME=0

while ! ip link show flannel.1 &> /dev/null; do
  sleep $WAIT_INTERVAL
  ELAPSED_TIME=$((ELAPSED_TIME + WAIT_INTERVAL))
  if [ $ELAPSED_TIME -ge $MAX_WAIT ]; then
    echo "Timed out waiting for flannel.1 interface to become ready."
    exit 1
  fi
done

# Now that flannel.1 is up, run the ethtool command
ethtool -K flannel.1 tx-checksum-ip-generic off
  2. sudo chmod +x /usr/local/bin/flannel-fix.sh

  3. sudo vi /etc/systemd/system/flannel-fix.service

IMPORTANT: Change k3s.service to k3s-agent.service on agent nodes

[Unit]
Description=Run command to fix flannel (vxlan + UDP) once after reboot and K3s is up
Requires=k3s.service
After=k3s.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/flannel-fix.sh

[Install]
WantedBy=default.target
  4. Execute the following commands one by one:
sudo systemctl daemon-reload
sudo systemctl enable flannel-fix.service
sudo systemctl start flannel-fix.service
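
Once the service has run, one way to confirm the workaround took effect (same unit and interface names as above):

sudo systemctl status flannel-fix.service
sudo ethtool -k flannel.1 | grep tx-checksum-ip-generic
# expected output: tx-checksum-ip-generic: off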
