DaemonSet for ContainerNetworking DHCP CNI Plugin #3917

Closed
AJMansfield opened this issue Feb 17, 2023 · 18 comments

@AJMansfield
Contributor

The Problem

When setting up an RKE2 cluster to use Multus, it's not clear what the appropriate way is to set up and configure the DHCP daemon needed to allow the ContainerNetworking DHCP IPAM plugin to function.

Though there are ways of getting this daemon to run using DaemonSets or systemd units from other projects, the fact that the daemon's binary (/opt/cni/dhcp) is distributed with RKE2 Multus suggests that it ought to be runnable without steps much more complicated than those needed to enable Multus in the first place.

The Solution I Want

I'd like to be able to add --cni=multus-dhcp as another RKE2 server argument, similar to specifying --cni=multus for getting Multus set up.

(Or, equivalently, from the server config.yaml:)

 # /etc/rancher/rke2/config.yaml
 cni:
 - multus
+- multus-dhcp
 - canal

On startup, the server would use this to place the appropriate manifest at /var/lib/rancher/rke2/server/manifests/rke2-multus-dhcp.yaml, and from that install an rke2-multus-dhcp Addon and create the rke2-multus-dhcp-ds DaemonSet to run the plugin daemon.

The Alternative Solutions I Already Have

The solution I'm using for now is to add a copy of the k8snetworkingwg reference-deployment dhcp-daemonset.yaml to the server manifest folder myself. The DHCP plugin is perfectly functional when set up this way; the only real issue with it is the third-party dependency it introduces, something I will eventually need to resolve.
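Concretely that just means dropping a local copy of that file into the RKE2 server manifests directory, something like:

$ sudo cp dhcp-daemonset.yaml /var/lib/rancher/rke2/server/manifests/

so that RKE2 applies it automatically at startup.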

Before I found the DaemonSet method above, I also had it working using systemd to run the daemon directly on the host. The plugin authors have pre-made systemd unit files for this which work perfectly, and in one sense this is the superior solution, since it only starts the daemon on demand (via systemd socket activation). But the daemon is already very lightweight, so the scalability disadvantage of having to set this up on each node led me to switch to using a DaemonSet.
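For reference, the socket-activated approach boils down to a pair of units along these lines (a minimal sketch with illustrative unit names; the binary path is the one RKE2 Multus installs, and the upstream repo's own unit files are what I actually used):

# /etc/systemd/system/cni-dhcp.socket (illustrative sketch)
[Unit]
Description=Socket for the CNI DHCP IPAM daemon

[Socket]
ListenStream=/run/cni/dhcp.sock

[Install]
WantedBy=sockets.target

# /etc/systemd/system/cni-dhcp.service (illustrative sketch)
[Unit]
Description=CNI DHCP IPAM daemon
Requires=cni-dhcp.socket
After=cni-dhcp.socket

[Service]
ExecStart=/opt/cni/dhcp daemon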

@brandond
Contributor

brandond commented Feb 17, 2023

The solution I'm using for now is to add a copy of the k8snetworkingwg reference-deployment dhcp-daemonset.yaml to the server manifest folder myself.

That is probably the way I would recommend doing it. I will defer to @manuelbuil @rbrtbnfgl @thomasferrandiz on the best way to configure Multus, but at this point I do not believe we are planning on allowing configuration of multus CNI plugins via the --cni field, or packaging any additional CNIs.

I do know that many plugins are already built in to multus; you might check out the docs at https://docs.rke2.io/install/network_options#using-multus-with-the-containernetworking-plugins and see if you can get the DHCP plugin working that way.

@AJMansfield
Contributor Author

I do know that many plugins are already built in to multus

The ContainerNetworking DHCP IPAM plugin is one of the plugins that's built into multus, and it's already included when you install multus in RKE2. You can already successfully invoke the client side of that plugin from a stock RKE2 + multus install with a configuration and manifests like these:

# /etc/rancher/rke2/config.yaml
---
cni:
- multus
- canal
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "mode": "bridge",
      "master": "eth0",
      "ipam": { "type": "dhcp" }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - image: busybox
    name: example-container
    command: ["sleep", "infinity"]

Attempting to run that pod, the plugin is indeed found and invoked -- it just fails with a DHCP-plugin-specific error message when it can't find the /run/cni/dhcp.sock socket, which would normally be created by the plugin's daemon so that pods can ask the daemon to acquire and maintain a DHCP lease on their behalf.

Further, that daemon's binary is also already included with RKE2 when you install multus: /opt/cni/dhcp.

The only thing that's missing is some mechanism to get that daemon to run alongside the daemon for multus itself.

Perhaps --cni=multus-dhcp is the wrong mechanism to configure this, but it seems silly for RKE2 to already package every other piece of that plugin yet omit the one additional DaemonSet needed for it to be functional.

@AJMansfield
Contributor Author

Just to point out what I mean, here's a minimal alternative DaemonSet that can run the already-included-with-multus dhcp daemon binary in a default busybox image:

# multus-dhcp.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: multus-dhcp-ds
  namespace: kube-system
  labels:
    tier: node
    app: multus-dhcp
spec:
  selector:
    matchLabels:
      tier: node
      app: multus-dhcp
  template:
    metadata:
      labels:
        tier: node
        app: multus-dhcp
    spec:
      hostNetwork: true
      containers:
      - name: dhcp
        image: busybox
        command: ["/opt/cni/dhcp", "daemon"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: binpath
          mountPath: /opt/cni
        - name: socketpath
          mountPath: /run/cni
      initContainers:
      - name: cleanup
        image: busybox
        command: ["rm", "-f", "/run/cni/dhcp.sock"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: socketpath
          mountPath: /host/run/cni
      volumes:
        - name: binpath
          hostPath:
            path: /opt/cni
        - name: socketpath
          hostPath:
            path: /run/cni
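To try it out, this manifest can either be dropped into /var/lib/rancher/rke2/server/manifests/ (like the reference deployment above) or applied by hand; once the daemon pod is running on a node, its socket should show up there:

$ kubectl apply -f multus-dhcp.yaml
$ ls -l /run/cni/dhcp.sock    # on the node, once the daemon pod is up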

@brandond
Contributor

It sounds like we might need to bundle a subchart for that, similar to what we did for whereabouts in rancher/rke2-charts#272

@Winor

Winor commented Apr 26, 2023

I tried deploying the DaemonSet @AJMansfield provided, but now I'm getting this error when trying to start the pod, any ideas?

 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5191c1518d0e91d33846250d9e6711c8489a9b2de1a4b4860864f736a6189a86": plugin type="multus" name="multus-cni-network" failed (add): [home-automation/home-assistant-64fc8db8cf-ft7mp/dbe17c37-a6be-4b25-bc57-a14a946150ef:macvlan-dhcp-ha]: error adding container to network "macvlan-dhcp-ha": error calling DHCP.Allocate: failed to Statfs "/var/run/netns/cni-67245572-4e92-2911-4af6-2a522836004b": no such file or directory 

@Winor

Winor commented Apr 26, 2023

Adding a /var/run/netns mount like so fixed my problem:

...
  volumeMounts:
  - name: netnspath
    mountPath: /var/run/netns
    mountPropagation: HostToContainer
...
volumes:
  - name: netnspath
    hostPath:
      path: /run/netns

I still can't start the pod though:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8eff1b971fc1e3e180394deb3cc2755252d003477979a6190af413247304d3d3": plugin type="multus" name="multus-cni-network" failed (add): [home-automation/home-assistant-64fc8db8cf-x2hj9/d7b65403-1429-4c85-9292-84009e5b2b28:macvlan-dhcp-ha]: error adding container to network "macvlan-dhcp-ha": error calling DHCP.Allocate: no more tries 

@brandond
Contributor

Have you by any chance tried using whereabouts instead of the DHCP IPAM, or does that not meet your needs?

@Winor

Winor commented Apr 26, 2023

I haven't tried it yet; it should work for my needs too, but I just prefer using my own DHCP server if possible.

DHCP daemon logs:

2023/04/26 12:06:06 cde681e54ee36a2f86442785163c2b76856a2433f91dd2164b74de447daf360b/macvlan-dhcp-ha/net1: acquiring lease
2023/04/26 12:06:06 Link "net1" down. Attempting to set up
2023/04/26 12:06:06 network is down
2023/04/26 12:06:06 retrying in 4.881018 seconds
2023/04/26 12:06:21 no DHCP packet received within 10s
2023/04/26 12:06:21 retrying in 8.329120 seconds

Seems like it can't reach the external DHCP server; the DHCP server's logs indicate that no request has been made.

My NetworkAttachmentDefinition:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-dhcp-ha
  namespace: home-automation
spec:
  config: '{
            "cniVersion": "0.3.1",
            "name": "macvlan-dhcp-ha",
            "type": "macvlan",
            "mode": "bridge",
            "master": "enp11s0",
            "ipam": {
              "type": "host-local",
              "type": "dhcp"
          }
        }'

Pod metadata:

metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{ "name" : "macvlan-dhcp-ha", "mac": "8a:8a:b5:ce:04:33" }]'

Host interfaces (ip addr):

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:dd:c1:39 brd ff:ff:ff:ff:ff:ff
    altname enp11s0
    inet 10.20.0.4/16 brd 10.20.255.255 scope global dynamic noprefixroute ens192
       valid_lft 3953sec preferred_lft 3953sec
    inet6 2a06:c701:be5a:5f00::1cf3/128 scope global dynamic noprefixroute
       valid_lft 6667sec preferred_lft 3967sec
    inet6 fe80::afb3:ba79:52ad:22ce/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Same result when trying to run the socket manually (instead of via the DaemonSet).

@thomasferrandiz
Contributor

@Winor why do you have both "type": "host-local" and "type": "dhcp" in your configuration?
There should be only 1 ipam plugin at a time.
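For DHCP, the ipam block should contain just the single plugin, e.g.:

  "ipam": {
    "type": "dhcp"
  }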

@AJMansfield
Contributor Author

Have you by any chance tried using whereabouts instead of the DHCP IPAM, or does that not meet your needs?

In my case, "acquire an IP address from an external DHCP server" is actually essential for my application -- though, I ended up finding that I needed more control over the DHCP process itself (setting specific options, etc) so at this point I'm just using CNI to attach an unconfigured interface and having a udhcpc container handle the rest.

It'd still be good to have the plugin functional though, even if I no longer need it for my use case.

@brandond
Contributor

Good catch @thomasferrandiz, that is definitely an invalid configuration.

@Winor

Winor commented Apr 26, 2023

@Winor why do you have both "type": "host-local" and "type": "dhcp" in your configuration? There should be only 1 ipam plugin at a time.

Yeah, I already noticed that and removed it; still the same result.

@Winor

Winor commented Apr 26, 2023

I did manage to get it to work with whereabouts as @brandond suggested, though I'm not sure if my configuration is right for what I wanted to achieve in the first place:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts-ha
  namespace: home-automation
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "ens192",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.20.3.1/16",
        "range_start": "10.20.3.11",
        "range_end": "10.20.3.254",
        "gateway": "10.20.0.1"
      }
    }'

With this configuration the pod starts, and I can see the network interface show up inside the pod with an IP address from the configured range, but I still seem to have no connectivity with the host network: I can't send or receive ping requests from external devices on the network.

@brandond
Contributor

brandond commented Apr 26, 2023

Have you tried tcpdumping on the bridge or master interface to see if the traffic shows up? If you have network policy rules in place, or ufw/firewalld enabled, that might also block the traffic.
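For example, something along these lines on the host should show whether the DHCP requests ever hit the wire (enp11s0 being the master interface from your config; DHCP uses UDP ports 67/68):

$ sudo tcpdump -eni enp11s0 'port 67 or port 68'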

@thomasferrandiz
Contributor

I can add an optional manifest in the rke2-multus chart to install the daemonset and run the dhcp daemon. That should make the dhcp plugin functional.

@Winor

Winor commented Apr 27, 2023

@brandond I didn't try tcpdumping, but I have no firewall enabled. Anyway, I ended up using ipvlan instead; it just works, so I'll keep that for now.

One strange behaviour I noticed is that Multus won't attach network interfaces to pods after boot: /etc/cni/net.d/00-multus.conf is not created, and only after I redeploy rke2-multus-ds does it create the config file and start working. I'll try to investigate more and will open a new issue for that if needed, since it's unrelated to this issue.

@thomasferrandiz that would be great!

Thank you both for your help :)

@thomasferrandiz
Contributor

thomasferrandiz commented Jan 5, 2024

Validation steps:

  • Create file /var/lib/rancher/rke2/server/manifests/rke2-multus-config.yaml with the following content:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-multus
  namespace: kube-system
spec:
  valuesContent: |-
    manifests:
      dhcpDaemonSet: true

  • Create file /etc/rancher/rke2/config.yaml with the following content to configure rke2 to use multus:

cni: multus,calico

  • Start rke2.
  • Check that the rke2-multus-dhcp pods were created and started properly in the kube-system namespace.
  • Check that the file /run/cni/dhcp.sock was created on the worker node.

@endawkins

endawkins commented Jan 24, 2024

Validated on master with a4986a5 / version 1.29

Environment Details

Infrastructure

  • Cloud
  • Hosted

Node(s) CPU architecture, OS, and Version:

Linux ip-172-31-17-51 6.2.0-1017-aws #17~22.04.1-Ubuntu SMP Fri Nov 17 21:07:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Cluster Configuration:

1 server
1 agent

Config.yaml:

write-kubeconfig-mode: 644
token: test
node-external-ip: <EXTERNAL_IP>
cni: multus,calico

Additional files

rke2-multus-config.yaml:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-multus
  namespace: kube-system
spec:
  valuesContent: |-
    manifests:
      dhcpDaemonSet: true

Testing Steps

  1. Copy config.yaml:
$ sudo mkdir -p /etc/rancher/rke2 && sudo cp config.yaml /etc/rancher/rke2
  2. Install RKE2 (do not enable or start rke2)
  3. Create file /var/lib/rancher/rke2/server/manifests/rke2-multus-config.yaml
  4. Enable and start rke2 (both server and agent)
  5. Verify that the rke2-multus-dhcp pods were created and started properly
  6. Verify /run/cni/dhcp.sock was created on the agent node

Replication Results:

  • rke2 version used for replication:
$ rke2 -v
rke2 version v1.29.0+rke2r1 (4fd30c26c91dd3f2f623c5af00d1ebcfec8c2709)
go version go1.21.5 X:boringcrypto
$ kubectl get nodes,pods -A -o wide

NAME                    STATUS   ROLES                       AGE     VERSION          INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
node/ip-172-31-17-51    Ready    control-plane,etcd,master   3m38s   v1.29.0+rke2r1   172.31.17.51    <EXTERNAL_IP>   Ubuntu 22.04.3 LTS   6.2.0-1017-aws   containerd://1.7.11-k3s2
node/ip-172-31-18-234   Ready    <none>                      118s    v1.29.0+rke2r1   172.31.18.234   <none>          Ubuntu 22.04.3 LTS   6.2.0-1017-aws   containerd://1.7.11-k3s2

NAMESPACE         NAME                                                       READY   STATUS      RESTARTS        AGE     IP              NODE               NOMINATED NODE   READINESS GATES
calico-system     pod/calico-kube-controllers-f9d669cc7-khpqm                1/1     Running     0               2m45s   10.42.72.71     ip-172-31-17-51    <none>           <none>
calico-system     pod/calico-node-jrlrm                                      1/1     Running     0               2m46s   172.31.17.51    ip-172-31-17-51    <none>           <none>
calico-system     pod/calico-node-zgbzn                                      1/1     Running     0               118s    172.31.18.234   ip-172-31-18-234   <none>           <none>
calico-system     pod/calico-typha-564669dd4f-krn72                          1/1     Running     0               2m46s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/cloud-controller-manager-ip-172-31-17-51               1/1     Running     0               3m25s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/etcd-ip-172-31-17-51                                   1/1     Running     0               3m15s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-calico-crd-cf6zc                     0/1     Completed   0               3m21s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-calico-nwk9h                         0/1     Completed   2               3m21s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-coredns-5pwrr                        0/1     Completed   0               3m21s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-ingress-nginx-grfxw                  0/1     Completed   0               3m21s   10.42.72.65     ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-metrics-server-vfzgb                 0/1     Completed   0               3m21s   10.42.72.67     ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-multus-q88qb                         0/1     Completed   0               3m21s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-snapshot-controller-6lhsr            0/1     Completed   1               3m21s   10.42.72.72     ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-snapshot-controller-crd-dmbk8        0/1     Completed   0               3m21s   10.42.72.68     ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-snapshot-validation-webhook-hnxdq    0/1     Completed   0               3m21s   10.42.72.66     ip-172-31-17-51    <none>           <none>
kube-system       pod/kube-apiserver-ip-172-31-17-51                         1/1     Running     0               3m23s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/kube-controller-manager-ip-172-31-17-51                1/1     Running     0               3m27s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/kube-proxy-ip-172-31-17-51                             1/1     Running     0               3m23s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/kube-proxy-ip-172-31-18-234                            1/1     Running     0               117s    172.31.18.234   ip-172-31-18-234   <none>           <none>
kube-system       pod/kube-scheduler-ip-172-31-17-51                         1/1     Running     0               3m27s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-coredns-rke2-coredns-5b8c65d87f-nmc57             1/1     Running     0               3m10s   10.42.72.70     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-coredns-rke2-coredns-5b8c65d87f-vjtdd             1/1     Running     0               110s    10.42.2.193     ip-172-31-18-234   <none>           <none>
kube-system       pod/rke2-coredns-rke2-coredns-autoscaler-945fbd459-w5f5r   1/1     Running     0               3m10s   10.42.72.69     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-ingress-nginx-controller-486v4                    1/1     Running     0               65s     10.42.2.194     ip-172-31-18-234   <none>           <none>
kube-system       pod/rke2-ingress-nginx-controller-lpxjz                    1/1     Running     0               112s    10.42.72.76     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-metrics-server-544c8c66fc-qnzth                   1/1     Running     0               117s    10.42.72.74     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-multus-ds-hdv2t                                   1/1     Running     2 (2m49s ago)   3m11s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-multus-ds-z9lmj                                   1/1     Running     3 (75s ago)     118s    172.31.18.234   ip-172-31-18-234   <none>           <none>
kube-system       pod/rke2-snapshot-controller-59cc9cd8f4-f542j              1/1     Running     0               104s    10.42.72.78     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-snapshot-validation-webhook-54c5989b65-v2q4j      1/1     Running     0               115s    10.42.72.75     ip-172-31-17-51    <none>           <none>
tigera-operator   pod/tigera-operator-59d6c9b46-622w8                        1/1     Running     0               2m53s   172.31.17.51    ip-172-31-17-51    <none>           <none>

$ ls -l /run/cni/
ls: cannot access '/run/cni/': No such file or directory

Validation Results:

  • rke2 version used for validation:
$ rke2 -v
rke2 version v1.29.1-rc2+rke2r1 (a4986a5a5840f1a259e66c61553a9abd58ef9624)
go version go1.21.6 X:boringcrypto
$ kubectl get nodes,pods -A -o wide

NAME                    STATUS   ROLES                       AGE     VERSION          INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
node/ip-172-31-17-51    Ready    control-plane,etcd,master   4m23s   v1.29.1+rke2r1   172.31.17.51    <EXTERNAL_IP>   Ubuntu 22.04.3 LTS   6.2.0-1017-aws   containerd://1.7.11-k3s2
node/ip-172-31-18-234   Ready    <none>                      99s     v1.29.1+rke2r1   172.31.18.234   <none>          Ubuntu 22.04.3 LTS   6.2.0-1017-aws   containerd://1.7.11-k3s2

NAMESPACE         NAME                                                        READY   STATUS      RESTARTS        AGE     IP              NODE               NOMINATED NODE   READINESS GATES
calico-system     pod/calico-kube-controllers-6b6bb667c5-jwbdx                1/1     Running     0               3m29s   10.42.72.69     ip-172-31-17-51    <none>           <none>
calico-system     pod/calico-node-mgf69                                       1/1     Running     0               99s     172.31.18.234   ip-172-31-18-234   <none>           <none>
calico-system     pod/calico-node-pfqpw                                       1/1     Running     0               3m29s   172.31.17.51    ip-172-31-17-51    <none>           <none>
calico-system     pod/calico-typha-588cd74948-kk2bf                           1/1     Running     0               3m29s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/cloud-controller-manager-ip-172-31-17-51                1/1     Running     0               4m18s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/etcd-ip-172-31-17-51                                    1/1     Running     0               3m59s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-calico-2nv72                          0/1     Completed   2               4m4s    172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-calico-crd-bkvb4                      0/1     Completed   0               4m4s    172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-coredns-nv929                         0/1     Completed   0               4m4s    172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-ingress-nginx-f8zf2                   0/1     Completed   0               4m4s    10.42.72.67     ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-metrics-server-944j2                  0/1     Completed   0               4m4s    10.42.72.65     ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-multus-8whsb                          0/1     Completed   0               4m4s    172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-snapshot-controller-crd-2d44p         0/1     Completed   0               4m4s    10.42.72.66     ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-snapshot-controller-kt4h6             0/1     Completed   1               4m4s    10.42.72.68     ip-172-31-17-51    <none>           <none>
kube-system       pod/helm-install-rke2-snapshot-validation-webhook-7g7hl     0/1     Completed   0               4m4s    10.42.72.71     ip-172-31-17-51    <none>           <none>
kube-system       pod/kube-apiserver-ip-172-31-17-51                          1/1     Running     0               4m14s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/kube-controller-manager-ip-172-31-17-51                 1/1     Running     0               4m20s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/kube-proxy-ip-172-31-17-51                              1/1     Running     0               4m13s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/kube-proxy-ip-172-31-18-234                             1/1     Running     0               98s     172.31.18.234   ip-172-31-18-234   <none>           <none>
kube-system       pod/kube-scheduler-ip-172-31-17-51                          1/1     Running     0               4m20s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-coredns-rke2-coredns-9849d5ddb-7vwwh               1/1     Running     0               3m51s   10.42.72.70     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-coredns-rke2-coredns-9849d5ddb-lct5w               1/1     Running     0               98s     10.42.2.192     ip-172-31-18-234   <none>           <none>
kube-system       pod/rke2-coredns-rke2-coredns-autoscaler-64b867c686-zrvzq   1/1     Running     0               3m51s   10.42.72.72     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-ingress-nginx-controller-jrnbk                     1/1     Running     0               46s     10.42.2.193     ip-172-31-18-234   <none>           <none>
kube-system       pod/rke2-ingress-nginx-controller-vxtmf                     1/1     Running     0               2m26s   10.42.72.77     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-metrics-server-544c8c66fc-f79ll                    1/1     Running     0               2m43s   10.42.72.73     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-multus-24p4k                                       1/1     Running     3 (3m16s ago)   3m52s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-multus-dhcp-6nfxq                                  1/1     Running     0               46s     172.31.18.234   ip-172-31-18-234   <none>           <none>
kube-system       pod/rke2-multus-dhcp-hrwpn                                  1/1     Running     0               3m12s   172.31.17.51    ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-multus-vfql6                                       1/1     Running     3 (54s ago)     99s     172.31.18.234   ip-172-31-18-234   <none>           <none>
kube-system       pod/rke2-snapshot-controller-59cc9cd8f4-bfs5c               1/1     Running     0               2m34s   10.42.72.76     ip-172-31-17-51    <none>           <none>
kube-system       pod/rke2-snapshot-validation-webhook-54c5989b65-7f54p       1/1     Running     0               2m37s   10.42.72.75     ip-172-31-17-51    <none>           <none>
tigera-operator   pod/tigera-operator-59d6c9b46-bxq2n                         1/1     Running     0               3m36s   172.31.17.51    ip-172-31-17-51    <none>           <none>

$ ls -l /run/cni/

total 0
srwxr-xr-x 1 root root 0 Jan 24 15:58 dhcp.sock

Additional context / logs:

N/A
