kilo creates bridge interface only on one of the k8s nodes #129
Ack, thanks a lot for reporting this. You provided tons of helpful details. Could you share some additional pieces of info: |
@3rmack thanks for the quick reply. It's comforting that restarting doesn't resolve the issue; otherwise we might not have a convincing solution. The kubelet seems to complain that it can't find any configuration in the CNI directory. Indeed, it seems that the Kilo manifests for kubeadm install the CNI configuration in the wrong directory: they are using /etc/kubernetes/cni/net.d [0] instead of /etc/cni/net.d. Can you try redeploying the Kilo DaemonSet with the corrected host path? If this fixes the issue, then please submit a PR if you can :) [0] https://github.com/squat/kilo/blob/main/manifests/kilo-kubeadm.yaml#L163 |
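For reference, a minimal sketch of that redeploy, assuming a local copy of kilo-kubeadm.yaml and that the DaemonSet keeps the name kilo in kube-system:
# point the cni-conf-dir hostPath at the directory the kubelet actually reads
sed -i 's|/etc/kubernetes/cni/net.d|/etc/cni/net.d|' kilo-kubeadm.yaml
kubectl apply -f kilo-kubeadm.yaml
# the changed hostPath rolls the DaemonSet pods; wait for them to come back
kubectl -n kube-system rollout status daemonset/kilo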
Actually, it is already deployed with the correct path |
Hmm, in that case we'll need a bit more inspection. Can you please share the output? You wrote:
What do you mean? I didn't see any logs from Kilo that imply that. |
Took a look, and CNI v0.3.1 should still work. We need to ensure that the CNI configuration file exists on the broken node and check whether there are recent kubelet logs that continue to complain about missing CNI configuration. |
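A hedged sketch of those checks, run directly on the broken node (standard paths assumed):
# is the Kilo CNI config present where the kubelet looks?
ls -l /etc/cni/net.d/
cat /etc/cni/net.d/10-kilo.conflist
# any recent kubelet complaints about CNI?
journalctl -u kubelet --since "1 hour ago" | grep -i cni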
As discussed in #129 (comment), the Kilo manifests for kubeadm install the CNI configuration in the wrong directory. They are using /etc/kubernetes/cni/net.d [0] when they should be using /etc/cni/net.d [1]. [0] https://github.com/squat/kilo/blob/main/manifests/kilo-kubeadm.yaml#L163 [1] https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
I want to say that if we compare the logs from both kilo pods, the logs from the pod on the "failed" node are shorter. There is an extra "update" event in the logs on the "ok" node.
Config files are present on both nodes. Please check my initial post; the contents of those files are at the very end of it. |
Ack, yes, I somehow missed them earlier. Thanks. |
Also worth mentioning that it could be a hardware/OS/etc. issue with the cloud provider where I tested this. The Ubuntu servers were created from the cloud provider's templates. |
I haven't been able to replicate this :/ It seems like it may be an issue with the specific environment, perhaps a container runtime issue. |
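If it helps narrow things down, a rough sketch of runtime-side checks on the affected node (assuming containerd and crictl are in use; adjust for your runtime):
# are the Kilo containers actually running from the runtime's point of view?
crictl ps | grep -i kilo
# any CNI-related errors from the runtime itself?
journalctl -u containerd --since "1 hour ago" | grep -i cni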
I get the same error. Can I get any help?
root@RJYF-P-337:/etc/cni/net.d#
root@RJYF-P-337:/etc/cni/net.d# cat 10-kilo.conflist |jq
{
"cniVersion": "0.3.1",
"name": "kilo",
"plugins": [
{
"bridge": "kube-bridge",
"forceAddress": true,
"ipam": {
"ranges": [
[
{
"subnet": "10.1.0.0/24"
}
]
],
"type": "host-local"
},
"isDefaultGateway": true,
"mtu": 1420,
"name": "kubernetes",
"type": "bridge"
},
{
"capabilities": {
"portMappings": true
},
"snat": true,
"type": "portmap"
}
]
}
root@RJYF-P-337:/etc/cni/net.d#
root@RJYF-P-337:/etc/cni/net.d#
root@RJYF-P-337:/etc/cni/net.d# kubectl logs -f -n kube-system kilo-lcjvb kilo
{"caller":"main.go:277","msg":"Starting Kilo network mesh 'a1af9790ea541c683d528d5a1d23075528d682d4'.","ts":"2022-03-25T06:58:31.331505641Z"}
{"caller":"cni.go:61","component":"kilo","err":"failed to read IPAM config from CNI config list file: no IP ranges specified","level":"warn","msg":"failed to get CIDR from CNI file; overwriting it","ts":"2022-03-25T06:58:31.432995767Z"}
{"caller":"cni.go:69","component":"kilo","level":"info","msg":"CIDR in CNI file is empty","ts":"2022-03-25T06:58:31.433046208Z"}
{"CIDR":"10.1.0.0/24","caller":"cni.go:74","component":"kilo","level":"info","msg":"setting CIDR in CNI file","ts":"2022-03-25T06:58:31.43305818Z"}
{"caller":"mesh.go:375","component":"kilo","level":"info","msg":"overriding endpoint","new endpoint":"172.20.60.28:51820","node":"rjyf-p-337","old endpoint":"","ts":"2022-03-25T06:58:31.541709926Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T06:58:31.555442689Z"}
{"caller":"mesh.go:309","component":"kilo","event":"add","level":"info","node":{"Endpoint":{},"Key":[27,123,34,254,51,164,151,222,139,112,14,118,233,72,232,252,215,192,141,112,145,225,11,124,100,1,92,187,19,84,89,108],"NoInternalIP":false,"InternalIP":{"IP":"10.2.0.1","Mask":"/////w=="},"LastSeen":1648191504,"Leader":false,"Location":"","Name":"lc","PersistentKeepalive":0,"Subnet":{"IP":"10.1.3.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"full"},"ts":"2022-03-25T06:58:31.555600099Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T06:58:31.556817226Z"}
{"caller":"mesh.go:309","component":"kilo","event":"add","level":"info","node":{"Endpoint":null,"Key":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"gcp","Name":"rjyf-p-335","PersistentKeepalive":0,"Subnet":{"IP":"10.1.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2022-03-25T06:58:31.556912803Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T06:58:31.557776266Z"}
{"caller":"mesh.go:309","component":"kilo","event":"add","level":"info","node":{"Endpoint":{},"Key":[199,66,125,140,234,59,65,207,73,92,126,95,247,144,33,194,75,219,98,104,213,187,67,24,129,193,0,124,228,8,160,31],"NoInternalIP":false,"InternalIP":{"IP":"172.20.60.31","Mask":"///8AA=="},"LastSeen":1648191502,"Leader":false,"Location":"gcp","Name":"rjyf-p-336","PersistentKeepalive":0,"Subnet":{"IP":"10.1.2.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"full"},"ts":"2022-03-25T06:58:31.557862063Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T06:58:31.55877808Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T06:59:01.543738566Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T06:59:31.545704256Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:00:01.547772771Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:00:31.550088195Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:01:01.551853854Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:01:31.554154067Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:02:01.556278704Z"}
{"caller":"mesh.go:309","component":"kilo","event":"update","level":"info","node":{"Endpoint":null,"Key":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"gcp","Name":"rjyf-p-335","PersistentKeepalive":0,"Subnet":{"IP":"10.1.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2022-03-25T07:02:14.831024851Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:02:14.832378096Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:02:31.558733749Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:03:01.560622049Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:03:31.563116772Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:04:01.565075605Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:04:31.568063262Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:05:01.57004051Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:05:31.571529246Z"}
{"caller":"mesh.go:482","component":"kilo","error":"file does not exist","level":"error","ts":"2022-03-25T07:06:01.573270241Z"}
^C
root@RJYF-P-337:/etc/cni/net.d#
Here is my YAML:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kilo
namespace: kube-system
labels:
app.kubernetes.io/name: kilo
app.kubernetes.io/part-of: kilo
spec:
selector:
matchLabels:
app.kubernetes.io/name: kilo
app.kubernetes.io/part-of: kilo
template:
metadata:
labels:
app.kubernetes.io/name: kilo
app.kubernetes.io/part-of: kilo
spec:
serviceAccountName: kilo
hostNetwork: true
containers:
- name: boringtun
image: leonnicolas/boringtun
args:
- --disable-drop-privileges=true
- --foreground
- kilo0
securityContext:
privileged: true
volumeMounts:
- name: wireguard
mountPath: /var/run/wireguard
readOnly: false
- name: kilo
image: squat/kilo
args:
- --kubeconfig=/etc/kubernetes/kubeconfig
- --hostname=$(NODE_NAME)
- --create-interface=false
- --interface=kilo0
- --mesh-granularity=full
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- containerPort: 1107
name: metrics
securityContext:
privileged: true
volumeMounts:
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kilo-dir
mountPath: /var/lib/kilo
- name: kubeconfig
mountPath: /etc/kubernetes
readOnly: true
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
readOnly: false
initContainers:
- name: install-cni
image: squat/kilo
command:
- /bin/sh
- -c
- set -e -x;
cp /opt/cni/bin/* /host/opt/cni/bin/;
TMP_CONF="$CNI_CONF_NAME".tmp;
echo "$CNI_NETWORK_CONFIG" > $TMP_CONF;
rm -f /host/etc/cni/net.d/*;
mv $TMP_CONF /host/etc/cni/net.d/$CNI_CONF_NAME
env:
- name: CNI_CONF_NAME
value: 10-kilo.conflist
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kilo
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/cni/net.d
- name: kilo-dir
hostPath:
path: /var/lib/kilo
- name: kubeconfig
configMap:
name: kube-proxy
items:
- key: kubeconfig.conf
path: kubeconfig
- name: lib-modules
hostPath:
path: /lib/modules
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
- name: wireguard
hostPath:
path: /var/run/wireguard
Kubernetes version: v1.20.9 |
@hhstu this seems like the kilo0 device is not available / doesn't exist. Are there any logs from the boringtun container? |
Just this, @squat:
root@RJYF-P-337:/etc/cni/net.d# kubectl logs -f -n kube-system kilo-tgmnz boringtun
2022-03-25T07:23:39.490195Z INFO boringtun_cli: BoringTun started successfully
at boringtun-cli/src/main.rs:178
|
Hmmm can you please show a list of the devices available in the erroring Kilo Pod? |
root@RJYF-P-337:/etc/cni/net.d# kubectl exec -it -n kube-system kilo-tgmnz -c kilo ip a
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:95:d4:7c brd ff:ff:ff:ff:ff:ff
inet 172.20.60.28/22 brd 172.20.63.255 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe95:d47c/64 scope link
valid_lft forever preferred_lft forever
3: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
link/ether ba:35:6f:9a:01:f1 brd ff:ff:ff:ff:ff:ff
inet 10.2.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.2.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
4: kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue state UP qlen 1000
link/ether 82:7f:e9:a4:2f:9f brd ff:ff:ff:ff:ff:ff
inet 10.1.0.1/24 brd 10.1.0.255 scope global kube-bridge
valid_lft forever preferred_lft forever
inet6 fe80::a0ae:11ff:fee8:a477/64 scope link
valid_lft forever preferred_lft forever
17: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
31: kilo0: <POINTOPOINT,MULTICAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 500
link/[65534]
32: vethd3527ba5@kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP
link/ether 82:7f:e9:a4:2f:9f brd ff:ff:ff:ff:ff:ff
inet6 fe80::807f:e9ff:fea4:2f9f/64 scope link
valid_lft forever preferred_lft forever
root@RJYF-P-337:/etc/cni/net.d# |
Thanks @hhstu, so there is indeed a kilo0 interface available. Some things that come to mind: what differences are there between this node and the one that is working? Different OS? OS version? Hardware? One uses boringtun and the other doesn't? Knowing the differences may help determine why this works on one machine but not the other. |
This is a new cluster created with kubeadm. None of the Kilo pods work! I have never gotten it working, @squat. Here is my kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.20.60.28
bindPort: 6443
nodeRegistration:
criSocket: /run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.9
controlPlaneEndpoint: apiserver.cluster.local:6443
imageRepository: 172.20.60.28:5000/ccs
networking:
dnsDomain: cluster.local
podSubnet: 10.1.0.0/16
serviceSubnet: 10.2.0.0/16
apiServer:
certSANs:
- 127.0.0.1
- apiserver.cluster.local
- 172.20.60.28
- 172.20.60.31
- 172.20.60.32
- 10.103.97.2
extraArgs:
feature-gates: TTLAfterFinished=true,RemoveSelfLink=false
max-mutating-requests-inflight: "4000"
max-requests-inflight: "8000"
default-unreachable-toleration-seconds: "2"
extraVolumes:
- name: localtime
hostPath: /etc/localtime
mountPath: /etc/localtime
readOnly: true
pathType: File
controllerManager:
extraArgs:
bind-address: 0.0.0.0
secure-port: "10257"
port: "10252"
kube-api-burst: "100"
kube-api-qps: "50"
feature-gates: TTLAfterFinished=true,RemoveSelfLink=false
experimental-cluster-signing-duration: 876000h
extraVolumes:
- hostPath: /etc/localtime
mountPath: /etc/localtime
name: localtime
readOnly: true
pathType: File
scheduler:
extraArgs:
bind-address: 0.0.0.0
kube-api-burst: "100"
kube-api-qps: "50"
port: "10251"
secure-port: "10259"
feature-gates: TTLAfterFinished=true,RemoveSelfLink=false
extraVolumes:
- hostPath: /etc/localtime
mountPath: /etc/localtime
name: localtime
readOnly: true
pathType: File
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
metricsBindAddress: 0.0.0.0
bindAddress: 0.0.0.0
ipvs:
syncPeriod: 30s
minSyncPeriod: 5s
scheduler: rr
excludeCIDRs:
- 10.103.97.2/32
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeAPIQPS: 40
kubeAPIBurst: 50
imageMinimumGCAge: 48h
imageGCHighThresholdPercent: 85
evictionHard:
imagefs.available: 5%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
|
Also, @hhstu does this work if you pin Kilo to 0.3.1? I wonder if this might be due to the switch to using a different WireGuard client library. |
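For reference, one way to pin the image, assuming the DaemonSet and its container are both named kilo as in the manifest above:
kubectl -n kube-system set image daemonset/kilo kilo=squat/kilo:0.3.1
kubectl -n kube-system rollout status daemonset/kilo
# then re-check the logs
kubectl -n kube-system logs -l app.kubernetes.io/name=kilo -c kilo --tail=20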
After changing to 0.3.1:
root@RJYF-P-337:~# kubectl logs -f -n kube-system kilo-8zvcp kilo
{"caller":"main.go:221","msg":"Starting Kilo network mesh '0.3.1'.","ts":"2022-03-25T08:18:42.676269936Z"}
{"caller":"cni.go:60","component":"kilo","err":"failed to read IPAM config from CNI config list file: no IP ranges specified","level":"warn","msg":"failed to get CIDR from CNI file; overwriting it","ts":"2022-03-25T08:18:42.777749293Z"}
{"caller":"cni.go:68","component":"kilo","level":"info","msg":"CIDR in CNI file is empty","ts":"2022-03-25T08:18:42.777855805Z"}
{"CIDR":"10.1.2.0/24","caller":"cni.go:73","component":"kilo","level":"info","msg":"setting CIDR in CNI file","ts":"2022-03-25T08:18:42.777881579Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:18:42.903321326Z"}
{"caller":"mesh.go:297","component":"kilo","event":"add","level":"info","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"gcp","Name":"rjyf-p-337","PersistentKeepalive":0,"Subnet":{"IP":"10.1.0.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2022-03-25T08:18:42.903400228Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:18:42.904926106Z"}
{"caller":"mesh.go:297","component":"kilo","event":"add","level":"info","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"lc","PersistentKeepalive":0,"Subnet":{"IP":"10.1.3.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2022-03-25T08:18:42.904978993Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:18:42.907857284Z"}
{"caller":"mesh.go:297","component":"kilo","event":"add","level":"info","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"gcp","Name":"rjyf-p-335","PersistentKeepalive":0,"Subnet":{"IP":"10.1.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2022-03-25T08:18:42.908017109Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:18:42.909536967Z"}
{"caller":"mesh.go:297","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"172.20.60.32","Port":51820},"Key":"dHRvZ2VyaEZtMk5sczBtUTN2M0x6bFBLWWZ4R2dDQ0JobEtHZEZKVGFtaz0=","NoInternalIP":false,"InternalIP":{"IP":"172.20.60.32","Mask":"///8AA=="},"LastSeen":1648196324,"Leader":false,"Location":"gcp","Name":"rjyf-p-335","PersistentKeepalive":0,"Subnet":{"IP":"10.1.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"full"},"ts":"2022-03-25T08:18:44.152516296Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:18:44.154150488Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:19:12.888451019Z"}
{"caller":"mesh.go:297","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"172.10.97.10","Port":51820},"Key":"RzNzaS9qT2tsOTZMY0E1MjZVam8vTmZBalhDUjRRdDhaQUZjdXhOVVdXdz0=","NoInternalIP":false,"InternalIP":{"IP":"10.2.0.1","Mask":"/////w=="},"LastSeen":1648196362,"Leader":false,"Location":"","Name":"lc","PersistentKeepalive":0,"Subnet":{"IP":"10.1.3.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"full"},"ts":"2022-03-25T08:19:22.203607866Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:19:22.20554322Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:19:42.891505001Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:20:12.894562702Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:20:42.897019171Z"}
{"caller":"mesh.go:459","component":"kilo","error":"failed to read the WireGuard dump output: Unable to access interface: Protocol not supported\n","level":"error","ts":"2022-03-25T08:21:12.900503875Z"} |
Thanks @hhstu those logs are a bit more helpful. |
Thanks @squat, I will continue to check the problem. |
Can I add some use cases, such as kubeadm-userspace.yaml and kubeadm-flannel-userspace.yaml, with a PR? |
Hi @hhstu, yes, I'd be very interested in taking a look at a PR for that 👍. I'm curious how/why it's different. Our E2E tests run on KinD, which uses kubeadm, and we test userspace WireGuard there.
There is nothing different, just my mistake: I forgot to set the wireguard volume for the kilo container. I hope to add the kubeadm-userspace and kubeadm-flannel-userspace use cases for the next person.
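For anyone else hitting the same symptom, a hedged sketch of that fix against the DaemonSet posted above (there the kilo container is at index 1 and a wireguard hostPath volume already exists):
kubectl -n kube-system patch daemonset kilo --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/1/volumeMounts/-",
   "value": {"name": "wireguard", "mountPath": "/var/run/wireguard"}}
]'
# or simply add the same volumeMount to the kilo container in the manifest and re-apply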
A 2-node k8s cluster created via kubeadm. The nodes are placed in different availability zones and have only dedicated external IP addresses (no private networks attached, etc.).
kubeadm init command (other params are default):
Initial k8s cluster status (no CNI):
Using kilo-kubeadm.yaml to install kilo. Here is the result:
It looks like configuring CNI is stuck at some step. If we check the Kilo logs, we can see that the config on node1 is incomplete.
WireGuard looks OK on both nodes:
Let's check the network interfaces, and here is the problem: the WG tunnel is OK, but the bridge interface is not created on node1.
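The interface checks above amount to something like the following on each node (a sketch; interface names taken from the outputs in this thread):
wg show                    # WireGuard tunnel state
ip addr show kube-bridge   # the bridge Kilo's CNI config should create; missing on node1 here
ip link                    # full interface list for comparison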