MacVtap L2 Network connectivity (LLDP) only working while running tcpdump #97

Closed
fizzers123 opened this issue Mar 31, 2023 · 3 comments · Fixed by #98
fizzers123 commented Mar 31, 2023

What happened:
We are trying to establish L2 connectivity between KubeVirt VMs. MacVtap seems like a promising option for this, as it eliminates the bridge in the virt-launcher. Once the VMs are up, they can ping each other without a problem, both when they run on the same node and when they run on different nodes.

Initially, however, the VMs do not see any neighbors via LLDP, even though the underlying hypervisor and network switch see both VMs. This can be seen in the screenshot below, taken on the Proxmox host (sentinel) that runs the Kubernetes nodes: it sees both the VM (vm-ubuntu-1) and the Kubernetes node (node1).

[Screenshot macvtap-proxmox-lldpd: LLDP neighbors as seen from Proxmox (or the core switch)]
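Inside a guest, neighbor visibility can be checked with the lldpd CLI (lldpd is installed via cloud-init below); a minimal check, assuming the macvtap-backed interface shows up as enp2s0 in the guest:

```bash
# show all LLDP neighbors seen by this VM
lldpcli show neighbors
# or restrict the output to the macvtap-backed interface
lldpcli show neighbors ports enp2s0
```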

What you expected to happen:

Now comes the interesting part. To debug this behavior, tcpdump was started on the virt-launcher's net1 interface, using the network namespace of that container. As soon as this tcpdump is running, the VM discovers the Proxmox host via LLDP, and the VMs discover each other as long as they run on the same node.

[Screenshot macvtap-node1-tcpdump: LLDP neighbors appearing while tcpdump runs on node1]

For both VMs to discover each other, two tcpdump processes need to run, one on each VM's net1 interface (a sketch follows the screenshot below).

[Screenshot macvtap-vm-ubuntu-1-lldpd: vm-ubuntu-1 discovering neighbors via lldpd]
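A minimal sketch of the two concurrent captures (the cni-<...> namespace names are placeholders, and 0x88cc is the LLDP EtherType, used here only to filter the output):

```bash
# one capture per VM's net1 interface; filter on LLDP frames only
ip netns exec cni-<id-of-vm1> tcpdump -i net1 ether proto 0x88cc &
ip netns exec cni-<id-of-vm2> tcpdump -i net1 ether proto 0x88cc &
```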

How to reproduce it (as minimally and precisely as possible):

Enable the Macvtap feature gate:

```yaml
---
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - Macvtap
```
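After applying the manifest, the active feature gates can be verified; the file name here is an assumption:

```bash
kubectl apply -f kubevirt-macvtap-featuregate.yaml
kubectl -n kubevirt get kubevirt kubevirt \
  -o jsonpath='{.spec.configuration.developerConfiguration.featureGates}'
```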
     
Install macvtap-cni via the cluster-network-addons-operator:

```bash
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.85.0/namespace.yaml
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.85.0/network-addons-config.crd.yaml
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.85.0/operator.yaml
```
```yaml
---
apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  macvtap: {}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: macvtap-deviceplugin-config
data:
  DP_MACVTAP_CONF: '[]'
```
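With the empty DP_MACVTAP_CONF above, the resource name used later (macvtap.network.kubevirt.io/ens18) is derived from the node's physical interface ens18. An explicit per-interface configuration is also possible; a sketch following the field names documented in the macvtap-cni README, with illustrative values:

```yaml
# hypothetical explicit variant of the ConfigMap data above;
# "lowerDevice", "mode" and "capacity" follow the macvtap-cni README,
# the values here are illustrative only
data:
  DP_MACVTAP_CONF: >
    [ { "name": "ens18", "lowerDevice": "ens18", "mode": "bridge", "capacity": 50 } ]
```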
  
Deploy the test VMs:

```yaml
---
kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: net1 # it needs to be named net1, otherwise the VM doesn't start. 
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/ens18
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "net1",
      "type": "macvtap",
      "mtu": 1500
    }'
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-ubuntu-1
spec:
  running: true
  template:
    metadata:
      labels:
        special: vmi-macvtap
    spec:
      nodeSelector:
        kubernetes.io/hostname: node1
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
          - name: l2-network
            macvtap: {}
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      - name: l2-network
        multus: # Secondary multus network
          networkName: net1

      terminationGracePeriodSeconds: 0
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  dhcp4: true
                enp2s0:
                  addresses:
                    - 10.0.1.2/24
            userData: |-
              #cloud-config
              password: ubuntu
              chpasswd: { expire: False }
              ssh_authorized_keys:
                - ssh-rsa <key>
              packages: 
                - qemu-guest-agent
                - lldpd
                - nmap
              runcmd:
                - [ systemctl, start, qemu-guest-agent]
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-ubuntu-2
spec:
  running: true
  template:
    metadata:
      labels:
        special: vmi-macvtap
    spec:
      nodeSelector:
        kubernetes.io/hostname: node1
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
          - name: l2-network
            macvtap: {}
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      - name: l2-network
        multus: # Secondary multus network
          networkName: net1

      terminationGracePeriodSeconds: 0
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  dhcp4: true
                enp2s0:
                  addresses:
                    - 10.0.1.3/24
            userData: |-
              #cloud-config
              password: ubuntu
              chpasswd: { expire: False }
              ssh_authorized_keys:
                - ssh-rsa <key>
              packages: 
                - qemu-guest-agent
                - lldpd
                - nmap
              runcmd:
                - [ systemctl, start, qemu-guest-agent]
```

Start tcpdump to enable L2 LLDP connectivity:

```bash
ip link show; ip -all netns exec ip link show
ip netns exec cni-<id> tcpdump -i net1
```
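The likely mechanism is that tcpdump puts the capture interface into promiscuous mode by default. Under that assumption, a capture started with tcpdump's -p (no-promiscuous-mode) flag should not restore LLDP connectivity, which makes for a quick cross-check:

```bash
# -p disables promiscuous mode; if PROMISC is the decisive factor,
# LLDP should stay broken while this capture runs
ip netns exec cni-<id> tcpdump -p -i net1 ether proto 0x88cc
```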

Anything else we need to know?:
I have already posted this issue on KubeVirt but have not yet received a reply: kubevirt/kubevirt#9464

fizzers123 (Author) commented:

Instead of tcpdump, @fabiand mentioned that we can also just enable promiscuous mode directly:

```bash
ip link show; ip -all netns exec ip link show
ip netns exec cni-a1896a46-f32e-1880-b2aa-c2e75d85a3ed ip link set net1 promisc on
ip netns exec cni-5858b1d6-2b3f-5e7f-86bd-16746c6e94ca ip link set net1 promisc on
```

This results in LLDP working as well!
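Whether the flag actually took effect can be read back from the interface flags; PROMISC should now be listed for net1:

```bash
# the PROMISC flag should appear in the angle brackets for net1
ip netns exec cni-<id> ip link show net1
```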

maiqueb (Collaborator) commented Mar 31, 2023

Oh nice!

I think we could expose a knob in the configuration so the CNI sets the promisc flag for you.

Would this be helpful?
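For illustration only, such a knob could live in the NetworkAttachmentDefinition's CNI config; the option name and placement below are purely hypothetical (the actual change landed via #98 and may look different):

```yaml
# hypothetical "promiscMode" knob in the NAD config shown earlier;
# not the implemented API, just a sketch of the idea
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "net1",
      "type": "macvtap",
      "mtu": 1500,
      "promiscMode": true
    }'
```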

fizzers123 (Author) commented:

Yes, I think this would be helpful.
