ubuntu 18.04 iptables errors #116

Closed
runningman84 opened this issue Mar 2, 2019 · 19 comments

@runningman84

I have tried to build a two node ubuntu 18.04 setup. The server is running on virtnuc1, the agent is running on virtnuc2:

kubectl get nodes                                                                                                                                                                                       
NAME       STATUS     ROLES    AGE   VERSION
virtnuc1   Ready      <none>   51m   v1.13.3-k3s.6
virtnuc2   NotReady   <none>   46m   v1.13.3-k3s.6

Describe the bug
There are a lot of iptables errors on the virtnuc2 node:

Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.680087    1682 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.712342    1682 flannel.go:75] Wrote subnet file to /run/flannel/subnet.env
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.712360    1682 flannel.go:79] Running backend.
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.712366    1682 vxlan_network.go:60] watching for new subnet leases
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.785155    1682 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.785200    1682 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.785560    1682 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.785682    1682 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.786118    1682 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.786382    1682 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.787100    1682 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.787297    1682 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.788943    1682 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.789247    1682 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.791257    1682 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.792596    1682 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.912990    1682 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.914002    1682 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.978980    1682 proxier.go:232] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-HOST'
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.979889    1682 proxier.go:238] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-CONTAINER
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.996596    1682 proxier.go:246] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-HOST'
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.997532    1682 proxier.go:252] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-CONTAINE
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.998147    1682 proxier.go:259] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-NON-LOCA
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: E0302 16:17:24.000540    1682 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-S
Mar 02 16:17:24 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: E0302 16:17:24.001206    1682 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-S
Mar 02 16:17:24 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: E0302 16:17:24.001811    1682 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-P
Mar 02 16:17:24 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: E0302 16:17:24.002428    1682 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-F
Mar 02 16:17:24 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.347379    1682 server.go:464] Version: v1.13.3-k3s.6
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.354142    1682 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.354252    1682 conntrack.go:52] Setting nf_conntrack_max to 131072
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.368281    1682 conntrack.go:83] Setting conntrack hashsize to 32768
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377299    1682 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377495    1682 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377782    1682 config.go:102] Starting endpoints config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377795    1682 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377813    1682 config.go:202] Starting service config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377817    1682 controller_utils.go:1027] Waiting for caches to sync for service config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.477919    1682 controller_utils.go:1034] Caches are synced for endpoints config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.477922    1682 controller_utils.go:1034] Caches are synced for service config controller
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.736050    1682 remote_runtime.go:173] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[stri
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.736130    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.736154    1682 kubelet_pods.go:1005] Error listing containers: &status.statusError{Code:4, Message:"context deadline exceeded", Details:[]*any.Any(nil)}
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.736173    1682 kubelet.go:1903] Failed cleaning pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.788788    1682 remote_runtime.go:173] ListPodSandbox with filter nil from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline ex
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.789058    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.789074    1682 generic.go:203] GenericPLEG: Unable to retrieve pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:19:32 virtnuc2 k3s[1682]: E0302 16:19:32.902644    1682 remote_runtime.go:173] ListPodSandbox with filter &PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{},} from runtime service faile
Mar 02 16:19:32 virtnuc2 k3s[1682]: E0302 16:19:32.903204    1682 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to list pod stats: failed to list all pod sandboxes: rpc error: c
Mar 02 16:20:23 virtnuc2 k3s[1682]: E0302 16:20:23.151006    1682 remote_runtime.go:173] ListPodSandbox with filter nil from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline ex
Mar 02 16:20:23 virtnuc2 k3s[1682]: E0302 16:20:23.151068    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:20:23 virtnuc2 k3s[1682]: E0302 16:20:23.151082    1682 kubelet.go:1201] Container garbage collection failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:20:33 virtnuc2 k3s[1682]: I0302 16:20:33.093834    1682 setters.go:421] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-02 16:20:33.09380591 +0000 UTC m=+192.030929943 LastTr
Mar 02 16:21:23 virtnuc2 k3s[1682]: E0302 16:21:23.789257    1682 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:21:23 virtnuc2 k3s[1682]: E0302 16:21:23.789330    1682 kuberuntime_sandbox.go:58] CreatePodSandbox for pod "tiller-deploy-6cf89f5895-6x2f2_kube-system(85c66c3a-3d02-11e9-b9c5-080027905085)" failed: r
Mar 02 16:21:23 virtnuc2 k3s[1682]: E0302 16:21:23.789345    1682 kuberuntime_manager.go:677] createPodSandbox for pod "tiller-deploy-6cf89f5895-6x2f2_kube-system(85c66c3a-3d02-11e9-b9c5-080027905085)" failed: 
Mar 02 16:21:23 virtnuc2 k3s[1682]: E0302 16:21:23.789435    1682 pod_workers.go:190] Error syncing pod 85c66c3a-3d02-11e9-b9c5-080027905085 ("tiller-deploy-6cf89f5895-6x2f2_kube-system(85c66c3a-3d02-11e9-b9c5-
Mar 02 16:22:33 virtnuc2 k3s[1682]: I0302 16:22:33.241102    1682 setters.go:421] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-02 16:22:33.241062781 +0000 UTC m=+312.178186815 LastT
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.735964    1682 remote_runtime.go:173] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[stri
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.736051    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.736062    1682 kubelet_pods.go:1021] Error listing containers: &status.statusError{Code:4, Message:"context deadline exceeded", Details:[]*any.Any(nil)}
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.736077    1682 kubelet.go:1903] Failed cleaning pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:23:24 virtnuc2 k3s[1682]: I0302 16:23:24.736094    1682 kubelet.go:1752] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m58.946925001s ago; threshold is 3m0s]
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.792416    1682 remote_runtime.go:173] ListPodSandbox with filter nil from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline ex
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.792466    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.792477    1682 generic.go:203] GenericPLEG: Unable to retrieve pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Expected behavior
The second node should run like the first node, without iptables errors. I am not sure if the other errors are related.

@superseb
Contributor

superseb commented Mar 2, 2019

What's the difference between the two? I would start with uname -r and lsmod differences.

@runningman84
Author

I can post some output later, but I guess there is not much difference. The base system was installed at the same time. The first node was installed using this command:

curl -sfL https://get.k3s.io | sh -

The second node was installed manually according to the docs.
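
For context, the documented manual join at the time boiled down to pointing the install script at the server, roughly like this (a sketch, not the exact command used here; the token normally comes from /var/lib/rancher/k3s/server/node-token on the server):

# Hypothetical agent install via the install script, assuming virtnuc1 is the server
curl -sfL https://get.k3s.io | K3S_URL=https://virtnuc1:6443 K3S_TOKEN=<node-token> sh -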

@runningman84
Author

OK, here is some debug output:

virtnuc1

root@virtnuc1:~# uname -a
Linux virtnuc1 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
root@virtnuc1:~# lsmod 
Module                  Size  Used by
nf_conntrack_netlink    40960  0
ipt_REJECT             16384  1
nf_reject_ipv4         16384  1 ipt_REJECT
xt_conntrack           16384  4
ip_set                 40960  0
nfnetlink              16384  2 nf_conntrack_netlink,ip_set
xt_multiport           16384  1
xt_nat                 16384  10
xt_tcpudp              16384  33
xt_addrtype            16384  3
xt_comment             16384  40
veth                   16384  0
ipt_MASQUERADE         16384  6
nf_nat_masquerade_ipv4    16384  1 ipt_MASQUERADE
iptable_filter         16384  1
vxlan                  57344  0
ip6_udp_tunnel         16384  1 vxlan
udp_tunnel             16384  1 vxlan
ip6table_nat           16384  0
nf_conntrack_ipv6      20480  1
nf_defrag_ipv6         36864  1 nf_conntrack_ipv6
nf_nat_ipv6            16384  1 ip6table_nat
ip6_tables             28672  1 ip6table_nat
xt_mark                16384  7
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
iptable_nat            16384  2
ip_vs                 147456  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      16384  21
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_nat_ipv4            16384  1 iptable_nat
nf_nat                 32768  4 nf_nat_masquerade_ipv4,nf_nat_ipv6,nf_nat_ipv4,xt_nat
nf_conntrack          131072  11 xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv6,nf_conntrack_ipv4,nf_nat,nf_nat_ipv6,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_netlink,ip_vs
aufs                  241664  0
overlay                77824  7
br_netfilter           24576  0
bridge                151552  1 br_netfilter
stp                    16384  1 bridge
llc                    16384  2 bridge,stp
vboxvideo              36864  1
ttm                   106496  1 vboxvideo
drm_kms_helper        172032  1 vboxvideo
joydev                 24576  0
input_leds             16384  0
drm                   401408  4 drm_kms_helper,vboxvideo,ttm
serio_raw              16384  0
snd_intel8x0           40960  0
snd_ac97_codec        131072  1 snd_intel8x0
ac97_bus               16384  1 snd_ac97_codec
fb_sys_fops            16384  1 drm_kms_helper
snd_pcm                98304  2 snd_intel8x0,snd_ac97_codec
syscopyarea            16384  1 drm_kms_helper
snd_timer              32768  1 snd_pcm
snd                    81920  4 snd_intel8x0,snd_timer,snd_ac97_codec,snd_pcm
soundcore              16384  1 snd
vboxguest             303104  0
mac_hid                16384  0
sysfillrect            16384  1 drm_kms_helper
sysimgblt              16384  1 drm_kms_helper
sch_fq_codel           20480  2
ib_iser                49152  0
rdma_cm                61440  1 ib_iser
iw_cm                  45056  1 rdma_cm
ib_cm                  53248  1 rdma_cm
ib_core               225280  4 rdma_cm,iw_cm,ib_iser,ib_cm
iscsi_tcp              20480  0
libiscsi_tcp           20480  1 iscsi_tcp
libiscsi               53248  3 libiscsi_tcp,iscsi_tcp,ib_iser
scsi_transport_iscsi    98304  3 iscsi_tcp,ib_iser,libiscsi
ip_tables              28672  2 iptable_filter,iptable_nat
x_tables               40960  12 xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_MASQUERADE,xt_addrtype,xt_nat,xt_comment,ip6_tables,ipt_REJECT,ip_tables,xt_mark
autofs4                40960  2
btrfs                1122304  0
zstd_compress         163840  1 btrfs
raid10                 53248  0
raid456               143360  0
async_raid6_recov      20480  1 raid456
async_memcpy           16384  2 raid456,async_raid6_recov
async_pq               16384  2 raid456,async_raid6_recov
async_xor              16384  3 async_pq,raid456,async_raid6_recov
async_tx               16384  5 async_pq,async_memcpy,async_xor,raid456,async_raid6_recov
xor                    24576  2 async_xor,btrfs
raid6_pq              114688  4 async_pq,btrfs,raid456,async_raid6_recov
libcrc32c              16384  4 nf_conntrack,nf_nat,raid456,ip_vs
raid1                  40960  0
raid0                  20480  0
multipath              16384  0
linear                 16384  0
hid_generic            16384  0
usbhid                 49152  0
hid                   118784  2 usbhid,hid_generic
crct10dif_pclmul       16384  0
crc32_pclmul           16384  0
ghash_clmulni_intel    16384  0
pcbc                   16384  0
aesni_intel           188416  0
ahci                   40960  2
psmouse               147456  0
aes_x86_64             20480  1 aesni_intel
crypto_simd            16384  1 aesni_intel
glue_helper            16384  1 aesni_intel
cryptd                 24576  3 crypto_simd,ghash_clmulni_intel,aesni_intel
libahci                32768  1 ahci
i2c_piix4              24576  0
e1000                 143360  0
pata_acpi              16384  0
video                  45056  0
root@virtnuc1:~# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-FIREWALL  all  --  anywhere             anywhere            
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
ACCEPT     all  --  virtnuc1/16          anywhere            
ACCEPT     all  --  anywhere             virtnuc1/16         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-FIREWALL  all  --  anywhere             anywhere            
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */

Chain KUBE-EXTERNAL-SERVICES (1 references)
target     prot opt source               destination         

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  virtnuc1/16          anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             virtnuc1/16          /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-SERVICES (1 references)
target     prot opt source               destination         
REJECT     tcp  --  anywhere             10.43.249.203        /* kube-system/tiller-deploy:tiller has no endpoints */ tcp dpt:44134 reject-with icmp-port-unreachable
root@virtnuc1:~# cat /etc/issue
Ubuntu 18.04.2 LTS \n \l

virtnuc2

root@virtnuc2:~# uname -r 
4.15.0-45-generic
root@virtnuc2:~# lsmod 
Module                  Size  Used by
aufs                  241664  0
overlay                77824  0
snd_intel8x0           40960  0
snd_ac97_codec        131072  1 snd_intel8x0
vboxvideo              36864  1
input_leds             16384  0
serio_raw              16384  0
joydev                 24576  0
ac97_bus               16384  1 snd_ac97_codec
ttm                   106496  1 vboxvideo
drm_kms_helper        172032  1 vboxvideo
snd_pcm                98304  2 snd_intel8x0,snd_ac97_codec
snd_timer              32768  1 snd_pcm
snd                    81920  4 snd_intel8x0,snd_timer,snd_ac97_codec,snd_pcm
vboxguest             303104  0
soundcore              16384  1 snd
drm                   401408  4 drm_kms_helper,vboxvideo,ttm
fb_sys_fops            16384  1 drm_kms_helper
syscopyarea            16384  1 drm_kms_helper
sysfillrect            16384  1 drm_kms_helper
sysimgblt              16384  1 drm_kms_helper
mac_hid                16384  0
sch_fq_codel           20480  2
ib_iser                49152  0
rdma_cm                61440  1 ib_iser
iw_cm                  45056  1 rdma_cm
ib_cm                  53248  1 rdma_cm
ib_core               225280  4 rdma_cm,iw_cm,ib_iser,ib_cm
iscsi_tcp              20480  0
libiscsi_tcp           20480  1 iscsi_tcp
libiscsi               53248  3 libiscsi_tcp,iscsi_tcp,ib_iser
scsi_transport_iscsi    98304  3 iscsi_tcp,ib_iser,libiscsi
ip_tables              28672  0
x_tables               40960  1 ip_tables
autofs4                40960  2
btrfs                1122304  0
zstd_compress         163840  1 btrfs
raid10                 53248  0
raid456               143360  0
async_raid6_recov      20480  1 raid456
async_memcpy           16384  2 raid456,async_raid6_recov
async_pq               16384  2 raid456,async_raid6_recov
async_xor              16384  3 async_pq,raid456,async_raid6_recov
async_tx               16384  5 async_pq,async_memcpy,async_xor,raid456,async_raid6_recov
xor                    24576  2 async_xor,btrfs
raid6_pq              114688  4 async_pq,btrfs,raid456,async_raid6_recov
libcrc32c              16384  1 raid456
raid1                  40960  0
raid0                  20480  0
multipath              16384  0
linear                 16384  0
hid_generic            16384  0
usbhid                 49152  0
hid                   118784  2 usbhid,hid_generic
crct10dif_pclmul       16384  0
crc32_pclmul           16384  0
ghash_clmulni_intel    16384  0
pcbc                   16384  0
aesni_intel           188416  0
aes_x86_64             20480  1 aesni_intel
crypto_simd            16384  1 aesni_intel
glue_helper            16384  1 aesni_intel
cryptd                 24576  3 crypto_simd,ghash_clmulni_intel,aesni_intel
psmouse               147456  0
i2c_piix4              24576  0
ahci                   40960  2
libahci                32768  1 ahci
e1000                 143360  0
pata_acpi              16384  0
video                  45056  0
root@virtnuc2:~# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
root@virtnuc2:~# cat /etc/issue
Ubuntu 18.04.2 LTS \n \l

@runningman84
Author

I think there is a problem with the k3s agent mode. If I run k3s in server mode on virtnuc2, everything looks just fine:

root@virtnuc2:~# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
virtnuc2   Ready    <none>   10m   v1.13.3-k3s.6
root@virtnuc2:~# kubectl get pods
No resources found.
root@virtnuc2:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS      RESTARTS   AGE
kube-system   coredns-7748f7f6df-75dv6         1/1     Running     0          10m
kube-system   helm-install-traefik-gpz9d       0/1     Completed   0          10m
kube-system   svclb-traefik-66bfb56f97-wvtgz   2/2     Running     0          9m41s
kube-system   traefik-dcd66ffd7-wgwnh          1/1     Running     0          9m41s

Is there a known issue running k3s agent using ubuntu 18.04?

@bjornramberg

bjornramberg commented Mar 3, 2019

I'm seeing the same iptables issue on my node (only on the node). Both master and node are identical RPi 3 B+ boards running Raspbian.

me@k8s2:~ $ uname -a
Linux k8s2 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l GNU/Linux
bear@k8s2:~ $ lsb_release -a
No LSB modules are available.
Distributor ID: Raspbian
Description:    Raspbian GNU/Linux 9.8 (stretch)
Release:        9.8
Codename:       stretch
me@k8s2:~ $ 

@lentzi90

lentzi90 commented Mar 7, 2019

I'm also seeing this on Fedora IoT 29 running on RPi 3B+. Both are fine running as servers but fail as agents.

$ uname -r
4.20.13-200.fc29.aarch64

The attached logs show one node first running as an agent with quite a lot of problems; then, after a reboot, I start it as a server instead and everything works well.
k3s.log

I should add that everything seems fine at first. The agent joins the cluster successfully and shows up as Ready. But when I try to add a pod (scheduled on the agent node), it fails. The pod stays in ContainerCreating and the node becomes NotReady:

Warning  ContainerGCFailed        21s                    kubelet, fili     rpc error: code = DeadlineExceeded desc = context deadline exceeded
Normal   NodeNotReady             10s (x2 over 5m22s)    kubelet, fili     Node fili status is now: NodeNotReady
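
A minimal sketch of how to surface those symptoms with standard kubectl commands (the pod name is a placeholder; "fili" is the agent node from the events above):

kubectl get nodes                    # the agent flips between Ready and NotReady
kubectl describe node fili           # shows the ContainerGCFailed / NodeNotReady events quoted above
kubectl describe pod <pending-pod>   # the pod sits in ContainerCreating with no progress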

@runningman84
Author

k3s v0.2 suffers from the same problem.

@erikwilson
Contributor

It looks like containerd never starts up, @lentzi90. Is it possible to run with --debug and share the containerd logs?
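
For reference, a foreground debug run of the agent would look roughly like this (a sketch; the server URL and token are placeholders, and the placement of --debug is assumed from the request above):

# Hypothetical foreground invocation with debug logging enabled
sudo k3s agent --debug --server https://<server>:6443 --token <node-token>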

@lentzi90

I'm attaching debug logs of the k3s-agent.
k3s-agent.log
It seems like you are right about containerd not starting. I cannot find any logs from it. They should be here, right?

$ sudo ls /var/lib/rancher/k3s/agent/containerd
bin                               io.containerd.metadata.v1.bolt  io.containerd.snapshotter.v1.native     tmpmounts
io.containerd.content.v1.content  io.containerd.runtime.v1.linux  io.containerd.snapshotter.v1.overlayfs
io.containerd.grpc.v1.cri         io.containerd.runtime.v2.task   lib
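
For what it's worth, a default k3s install normally writes the embedded containerd log into that same directory, so if containerd had started there should be something like this to inspect (path assumed from a standard installation):

sudo cat /var/lib/rancher/k3s/agent/containerd/containerd.log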

@runningman84
Author

It looks like even commands like

k3s crictl ps

do not work if you run k3s in agent mode.

@joakimr-axis
Contributor

I see this on RPI/Raspbian devices too, on the agent side. What is the latest update on the subject?

@erikwilson
Contributor

Please check the output of iptables --version, @joakimr-axis. nf_tables will cause an issue for newer versions of iptables such as v1.8; it should be in legacy mode or an older version. Also see #703.

@joakimr-axis
Contributor

issue for newer versions of iptables like v1.8

On my RPI/Raspbian devices:

# iptables --version
iptables v1.8.2 (nf_tables)
#

Just like you expected! Thanks for the info.

@johansmitsnl

I have a Debian Buster installation and I see no iptables rules at all.
It has version 1.8.2, and it seems broken with the 0.9.1 release of k3s?

@psy-q

psy-q commented May 6, 2020

For me it works correctly if I follow erikwilson's advice and switch to legacy mode:

update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy

It's also documented over at the Kubernetes docs.

@stefanlasiewski

stefanlasiewski commented Oct 7, 2020

@psy-q Ubuntu 18.04 doesn't provide those alternatives for iptables. If you are using Ubuntu 18.04, how did you get to the point of having /usr/sbin/iptables-legacy, etc.?

@psy-q

psy-q commented Oct 7, 2020

I'm running Debian; I guess Ubuntu 18.04 doesn't have a new enough version of iptables to also have the legacy mode available(?). According to the Kubernetes docs, Ubuntu 19.04 should have it. But newer versions of Kubernetes (after 1.17) shouldn't require this legacy mode, since they implement the newer nftables APIs directly, AFAIK.
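
A quick way to check what a given system actually ships (a sketch; update-alternatives only knows about iptables where the distro registers both backends):

iptables --version                    # a "(nf_tables)" or "(legacy)" suffix shows the active backend
ls /usr/sbin/iptables*                # iptables-legacy / iptables-nft only exist with iptables >= 1.8
update-alternatives --list iptables   # errors out if no alternative is registered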

@brandond
Member

brandond commented Dec 4, 2020

Closing due to age. Should be resolved on newer releases.

@brandond brandond closed this as completed Dec 4, 2020
@GavinB-hpe

I'm hitting this in Jan 2021 with a new install of k3s "stable" on an RPi running "buster". I then tried "latest"; same issue.

pi@raspberrypi:~ $ sudo k3s check-config

Verifying binaries in /var/lib/rancher/k3s/data/c8ca2ef57aa8ef0951f3d6c5aafbe2354ef69054c8011f5859283a9d282e4b75/bin:
- sha256sum: good
- links: good

System:
- /usr/sbin iptables v1.8.2 (nf_tables): should be older than v1.8.0 or in legacy mode (fail)
- swap: should be disabled
- routes: ok

: 
: 

pi@raspberrypi:~ $ cat /etc/apt/sources.list | grep deb\ 
deb http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 5.4.79-v7l+ #1373 SMP Mon Nov 23 13:27:40 GMT 2020 armv7l GNU/Linux
pi@raspberrypi:~ $ 
