DaemonSet for ContainerNetworking DHCP CNI Plugin #3917
That is probably the way I would recommend doing it. I will defer to @manuelbuil @rbrtbnfgl @thomasferrandiz on the best way to configure Multus, but at this point I do not believe we are planning on allowing configuration of multus CNI plugins via the --cni field, or packaging any additional CNIs. I do know that many plugins are already built into multus; you might check out the docs at https://docs.rke2.io/install/network_options#using-multus-with-the-containernetworking-plugins and see if you can get the DHCP plugin working that way.
The ContainerNetworking IPAM DHCP plugin is one of the plugins that's built into multus, and it's already included when you install multus in RKE2. You can already successfully invoke the client side of that plugin from a stock RKE2 + multus install with a configuration and manifest like this:

```yaml
# /etc/rancher/rke2/config.yaml
---
cni:
  - multus
  - canal
```

```yaml
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "mode": "bridge",
    "master": "eth0",
    "ipam": { "type": "dhcp" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
    - image: busybox
      name: example-container
      command: ["sleep", "infinity"]
```

Attempting to run that pod, the plugin is indeed found and invoked -- it just fails with a DHCP-plugin-specific error message when it can't find the daemon's socket. Further, that daemon's binary is also already included with RKE2 when you install multus. The only thing that's missing is some mechanism to get that daemon to run alongside the daemon for multus itself.
Just to point out what I mean, here's a minimal alternative DaemonSet that can run the already-included-with-multus dhcp daemon binary in a default busybox image:

```yaml
# multus-dhcp.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: multus-dhcp-ds
  namespace: kube-system
  labels:
    tier: node
    app: multus-dhcp
spec:
  selector:
    matchLabels:
      tier: node
      app: multus-dhcp
  template:
    metadata:
      labels:
        tier: node
        app: multus-dhcp
    spec:
      hostNetwork: true
      containers:
        # run the dhcp daemon from the host's CNI directory, with its
        # socket directory shared back to the host for the plugin to reach
        - name: dhcp
          image: busybox
          command: ["/opt/cni/dhcp", "daemon"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: binpath
              mountPath: /opt/cni
            - name: socketpath
              mountPath: /run/cni
      initContainers:
        # remove any stale socket left on the host by a previous run;
        # the daemon refuses to start if the socket already exists
        - name: cleanup
          image: busybox
          command: ["rm", "-f", "/host/run/cni/dhcp.sock"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: socketpath
              mountPath: /host/run/cni
      volumes:
        - name: binpath
          hostPath:
            path: /opt/cni
        - name: socketpath
          hostPath:
            path: /run/cni
```
It sounds like we might need to bundle a subchart for that, similar to what we did for whereabouts in rancher/rke2-charts#272
I tried deploying the DaemonSet @AJMansfield provided, but now I'm getting this error when trying to start the pod. Any ideas?
Adding a /var/run/netns mount like so fixed my problem:

```yaml
...
          volumeMounts:
            - name: netnspath
              mountPath: /var/run/netns
              mountPropagation: HostToContainer
...
      volumes:
        - name: netnspath
          hostPath:
            path: /run/netns
```

I still can't start the pod, though.
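Merged into the DaemonSet above, the changed sections would look roughly like this (a consolidated sketch combining the two snippets, nothing beyond what's shown in them):

```yaml
      containers:
        - name: dhcp
          image: busybox
          command: ["/opt/cni/dhcp", "daemon"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: binpath
              mountPath: /opt/cni
            - name: socketpath
              mountPath: /run/cni
            # the daemon opens pod network namespaces via the host's netns
            # mounts, so those mounts must propagate into the container
            - name: netnspath
              mountPath: /var/run/netns
              mountPropagation: HostToContainer
      volumes:
        - name: binpath
          hostPath:
            path: /opt/cni
        - name: socketpath
          hostPath:
            path: /run/cni
        - name: netnspath
          hostPath:
            path: /run/netns
```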
Have you by any chance tried using whereabouts instead of the DHCP IPAM, or does that not meet your needs?
I haven't tried it yet; it should work for my needs too, but I just prefer using my own DHCP server if possible.

DHCP daemon logs:

It seems like it can't reach the external DHCP server; the DHCP server logs indicate that no request has been made.

My NetworkAttachmentDefinition:

Pod metadata:

Host ifconfig:

Same result when trying to run the socket manually (instead of the DaemonSet).
@Winor why do you have both
In my case, "acquire an IP address from an external DHCP server" is actually essential for my application -- though, I ended up finding that I needed more control over the DHCP process itself (setting specific options, etc) so at this point I'm just using CNI to attach an unconfigured interface and having a udhcpc container handle the rest. It'd still be good to have the plugin functional though, even if I no longer need it for my use case.
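Roughly, that pattern looks like this (a sketch only; the master interface, the `net1` attachment name, and relying on busybox's udhcpc with its default script are all assumptions, not the exact setup described above):

```yaml
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-noipam
spec:
  # an empty "ipam" leaves the interface attached but unconfigured (L2-only)
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "mode": "bridge",
    "master": "eth0",
    "ipam": {}
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: udhcpc-example
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-noipam
spec:
  containers:
    - name: dhcp-client
      image: busybox
      # -i: the attached interface; -f: stay in the foreground so the
      # container keeps running and renews the lease
      command: ["udhcpc", "-i", "net1", "-f"]
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]  # needed to configure the interface
```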
Good catch @thomasferrandiz, that is definitely an invalid configuration.
Yeah, I already noticed that and removed it, still same result.
I did manage to get it to work with whereabouts as @brandond suggested, though I'm not sure if my configuration is right for what I wanted to achieve in the first place. With this configuration, the pod starts, and I can see the network interface show up inside the pod with an IP address from the configured range, but I still seem to have no connectivity with the host network; I can't send or receive ping requests from external devices on the network.
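For reference, a whereabouts-based NetworkAttachmentDefinition generally looks something like this (a sketch, not the exact configuration used above; the range and master interface are placeholders):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "mode": "bridge",
    "master": "eth0",
    "ipam": {
      "type": "whereabouts",
      "range": "192.168.1.192/27"
    }
  }'
```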
Have you tried tcpdumping on the bridge or master interface to see if the traffic shows up? If you have network policy rules in place, or ufw/firewalld enabled, that might also be blocking the traffic.
I can add an optional manifest in the
@brandond I didn't try tcpdumping, but I have no firewall enabled; in any case, I ended up using ipvlan instead, and it just works, so I'll keep that for now. One strange behaviour I noticed is that Multus won't attach network interfaces to pods after boot. @thomasferrandiz that could be great! Thank you both for your help :)
Validation steps:

Validated on master with a4986a5 / version 1.29

Environment Details

Infrastructure:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
Config.yaml:
Additional files:
Testing Steps:
Replication Results:
Validation Results:
Additional context / logs:
The Problem
When setting up an RKE2 cluster to use Multus, it's not clear what the appropriate way is to set up and configure the DHCP daemon needed to allow the ContainerNetworking DHCP IPAM plugin to function.
Though there are ways of getting this daemon to run using DaemonSets or systemd units from other projects, the fact that the binary for this daemon (`/opt/cni/dhcp`) is distributed with RKE2 Multus suggests that it ought to be runnable without additional steps much more complicated than those for running Multus in the first place.

The Solution I Want
I'd like to be able to add `--cni=multus-dhcp` as another RKE2 server argument, similar to specifying `--cni=multus` to get Multus set up. Or, equivalently, from the server `config.yaml`:

```diff
 # /etc/rancher/rke2/config.yaml
 cni:
 - multus
+- multus-dhcp
 - canal
```
On starting, the server would use this to place/install the appropriate manifest at `/var/lib/rancher/rke2/server/manifests/rke2-multus-dhcp.yaml`, and from that install a `rke2-multus-dhcp` Addon and create the `rke2-multus-dhcp-ds` DaemonSet to run the plugin daemon.
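If this followed the same pattern as RKE2's other packaged components, that manifest would presumably be a HelmChart resource along these lines (a sketch only; the chart name and its presence in the rke2-charts repo are hypothetical):

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-multus-dhcp.yaml (hypothetical)
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: rke2-multus-dhcp
  namespace: kube-system
spec:
  chart: rke2-multus-dhcp          # no such chart is published yet
  repo: https://rke2-charts.rancher.io
  bootstrap: true                  # deploy during initial cluster bringup
```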
The Alternative Solutions I Already Have
The solution I'm using for now is to add a copy of the k8snetworkingwg reference-deployment dhcp-daemonset.yaml to the server manifests folder myself. The DHCP plugin is perfectly functional when set up this way; the only plausible issue with it is the third-party dependency it introduces, something I will eventually need to resolve.
Before I found the DaemonSet method above, I also had it working using systemd to run the daemon directly on the host. The plugin authors have pre-made systemd unit files for this which work perfectly, and this is a superior solution in the sense that it only starts the daemon on demand (via systemd socket activation). But the daemon is already very lightweight, so that advantage is minor, and the scalability disadvantage of managing per-node systemd units led me to switch to using a DaemonSet.