
Defining two or more SR-IOV interfaces that use the same underlying resource is buggy #7444

Closed
ormergi opened this issue Mar 24, 2022 · 4 comments · Fixed by #8226

Comments


ormergi commented Mar 24, 2022

What happened:
A VM with two or more SR-IOV interfaces from the same resource pool might end up with each interface connected to the wrong network.

For example #6351 (comment)

The problem seems to be that the SR-IOV device PCI address is mapped only to the resource name, not to the
NetworkAttachmentDefinition. As a result, devices backed by the same resource cannot be told apart, and the mapping is inconsistent.

What you expected to happen:
Each SR-IOV interface should be attached to the correct network as specified in the VM manifest.

How to reproduce it (as minimally and precisely as possible):

Additional context:
We discovered this while investigating #6351

Environment:

  • KubeVirt version (use virtctl version): N/A
  • Kubernetes version (use kubectl version): N/A
  • VM or VMI specifications: N/A
  • Cloud provider or hardware configuration: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Kernel (e.g. uname -a): N/A
  • Install tools: N/A
  • Others: N/A

ormergi commented Mar 24, 2022

/sig network


YitzyD commented Jun 3, 2022

We are experiencing this issue as well.
Our setup:

The issue seems to be related to k8snetworkplumbingwg/multus-cni#499. The order of host PCI addresses in the PCIDEVICE_XXX environment variable (set by either multus or the sriov-cni; I have yet to investigate which controls the injected variable) is indeterminate, so the virt-launcher converter cannot (and likely should not) depend on it when assigning hostdev source addresses to interface devices.

The issue is in the following pieces of code:
1)

func NewPCIAddressPool(hostDevices []v1.HostDevice) *hostdevice.AddressPool {

leads to:
func (p *AddressPool) load(resourcePrefix string, resources []string) {

  • wherein virtwrap sets up a pool of PCI addresses from the PCIDEVICE_XXX environment variable to be used when generating the domain xml
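A minimal sketch of this first step follows. The struct and function names below are illustrative, not KubeVirt's actual types; only the PCIDEVICE_XXX env-var convention comes from the thread. The key point is that the pool is keyed by resource name alone, so two networks sharing a resource draw from one undifferentiated list:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// simplifiedAddressPool mimics, in spirit, virtwrap's AddressPool:
// PCI addresses are keyed by resource name only, never by network name.
type simplifiedAddressPool struct {
	addressesByResource map[string][]string
}

// loadPool reads PCIDEVICE_<RESOURCE>=addr1,addr2 style variables
// (the device-plugin convention). Names here are hypothetical.
func loadPool(environ []string) *simplifiedAddressPool {
	pool := &simplifiedAddressPool{addressesByResource: map[string][]string{}}
	for _, kv := range environ {
		name, value, found := strings.Cut(kv, "=")
		if !found || !strings.HasPrefix(name, "PCIDEVICE_") {
			continue
		}
		resource := strings.TrimPrefix(name, "PCIDEVICE_")
		pool.addressesByResource[resource] = strings.Split(value, ",")
	}
	return pool
}

func main() {
	os.Setenv("PCIDEVICE_VF", "0000:e1:0a.3,0000:e1:0b.0")
	pool := loadPool(os.Environ())
	// Both sriov0 and sriov1 will later draw from this single list.
	fmt.Println(pool.addressesByResource["VF"])
}
```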

 
2)

func createHostDevices(hostDevicesData []HostDeviceMetaData, addrPool AddressPooler, createHostDev createHostDevice) ([]api.HostDevice, error) {

leads to:
func createPCIHostDevice(hostDeviceData HostDeviceMetaData, hostPCIAddress string) (*api.HostDevice, error) {

  • wherein according to the order of interfaces in the VMI yaml, a PCI address from the pool is assigned to the hostdev source address
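The second step can be sketched as a purely positional pop from that pool. Again, the names are illustrative rather than KubeVirt's real code; the point is that assignment depends only on interface order in the VMI spec and address order in the env var:

```go
package main

import "fmt"

// popAddress pops the next PCI address for a resource, mirroring the
// FIFO behavior the converter relies on. Hypothetical helper name.
func popAddress(pool map[string][]string, resource string) (string, bool) {
	addrs := pool[resource]
	if len(addrs) == 0 {
		return "", false
	}
	pool[resource] = addrs[1:]
	return addrs[0], true
}

func main() {
	// Both interfaces map to the same resource, so sriov0 simply gets
	// whichever address happens to come first in PCIDEVICE_VF,
	// regardless of which VF was actually configured for its
	// NetworkAttachmentDefinition.
	pool := map[string][]string{"VF": {"0000:e1:0a.3", "0000:e1:0b.0"}}
	for _, iface := range []string{"sriov0", "sriov1"} {
		addr, _ := popAddress(pool, "VF")
		fmt.Printf("%s -> %s\n", iface, addr)
	}
}
```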

 
Example:

virt-launcher env:

PCIDEVICE_VF=0000:e1:0a.3,0000:e1:0b.0
KUBEVIRT_RESOURCE_NAME_sriov0=vf
KUBEVIRT_RESOURCE_NAME_sriov1=vf

vmi spec:

...
interfaces:
  - name: sriov0
    sriov: {}
  - name: sriov1
    sriov: {}

sriov0 will be assigned to host PCI device: 0000:e1:0a.3
sriov1 will be assigned to host PCI device: 0000:e1:0b.0

(I have yet to test this, but it seems as though multus/sriov-cni is assigning the PCIDEVICE_XXX in order of addresses by numerical value)

This is irrespective of whether the VF programmed by the sriov cni for the PCI device is associated with the network attachment definition name in the vmi networks array.
The outcome is that the domain XML will randomly associate a multus network (by name) with the incorrect host device, i.e. there is no knowing whether, say, 0000:e1:0a.3 is associated with the VF programmed for network attachment definition sriov0:

<hostdev mode='subsystem' type='pci' managed='no'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0xe1' slot='0x0a' function='0x3'/>       <======= ||
      </source>                                                                  ====== Who's to say that this mapping is correct?
      <alias name='ua-sriov-multus-sriov-network-sriov0'/>                     <==================== ||
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='no'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0xe1' slot='0x0b' function='0x0'/>
      </source>
      <alias name='ua-sriov-multus-sriov-network-sriov1'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
</hostdev>

Solutions:

  1. Patch the CNIs so that the PCI addresses in the PCIDEVICE_XXX environment variable are assigned in a deterministic order.
  2. Have virt-launcher assign the hostdev source PCI address by a different method, so that it matches the network attachment definition name.
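The second solution roughly corresponds to what the eventual fix does: derive a per-network PCI address mapping from the multus k8s.v1.cni.cncf.io/network-status annotation instead of from env-var order. A sketch of that idea, with field names following the k8snetworkplumbingwg device-info convention but with hypothetical Go types rather than KubeVirt's actual ones:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// networkStatus models the subset of the multus network-status
// annotation needed here; treat this as a sketch, not KubeVirt's types.
type networkStatus struct {
	Name       string `json:"name"`
	DeviceInfo *struct {
		Type string `json:"type"`
		PCI  *struct {
			PCIAddress string `json:"pci-address"`
		} `json:"pci"`
	} `json:"device-info"`
}

// networkToPCIMap builds a network-name -> PCI-address map, which makes
// hostdev assignment independent of PCIDEVICE_XXX ordering.
func networkToPCIMap(annotation []byte) (map[string]string, error) {
	var statuses []networkStatus
	if err := json.Unmarshal(annotation, &statuses); err != nil {
		return nil, err
	}
	m := map[string]string{}
	for _, s := range statuses {
		if s.DeviceInfo != nil && s.DeviceInfo.Type == "pci" && s.DeviceInfo.PCI != nil {
			m[s.Name] = s.DeviceInfo.PCI.PCIAddress
		}
	}
	return m, nil
}

func main() {
	// Hypothetical annotation content; note sriov0 maps to 0b.0 here,
	// the opposite of what env-var ordering would have produced.
	annotation := []byte(`[
	  {"name": "default/sriov-network-sriov0",
	   "device-info": {"type": "pci", "pci": {"pci-address": "0000:e1:0b.0"}}},
	  {"name": "default/sriov-network-sriov1",
	   "device-info": {"type": "pci", "pci": {"pci-address": "0000:e1:0a.3"}}}
	]`)
	m, err := networkToPCIMap(annotation)
	if err != nil {
		panic(err)
	}
	fmt.Println(m["default/sriov-network-sriov0"])
}
```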


ormergi commented Jun 6, 2022

@YitzyD Thanks for sharing the detailed info 🙂
To work around this, you can create a VF device pool (via the sriov-device-plugin config, or an SriovNetworkPolicy if the SR-IOV network operator is deployed) for each group of VFs that share a configuration [1].
For example: one pool for VFs configured with VLAN 20 and another for VFs with VLAN 30.
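A hedged sketch of such a split sriov-device-plugin config follows; the interface name and VF ranges are purely illustrative, and which VFs belong in which pool depends on your own VLAN setup:

```json
{
  "resourceList": [
    {
      "resourceName": "vf_pool_vlan20",
      "selectors": { "pfNames": ["ens1f0#0-3"] }
    },
    {
      "resourceName": "vf_pool_vlan30",
      "selectors": { "pfNames": ["ens1f0#4-7"] }
    }
  ]
}
```

Each NetworkAttachmentDefinition then references a distinct resourceName, so a VM's interfaces never draw from the same ambiguous pool.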

In any case, we plan to fix this properly.

RamLavi added a commit to RamLavi/kubevirt that referenced this issue Jul 31, 2022
current sriov mapping is flawed on vmi's with more than 2 SRIOV
networks [0]. Adding a Xfail test to record it.

[0] kubevirt#7444

Signed-off-by: Ram Lavi <ralavi@redhat.com>
RamLavi added a commit to RamLavi/kubevirt that referenced this issue Aug 8, 2022
This commit utilises the SRIOV-network PCI-address mapping that was
digested from the multus network-status annotation and mounted to a
volume, in order to create the SRIOV devices.
If the map does not exist, then the mapping will fall back to the former
approach.
This commit solves issue [0] reported on flawed mapping of sriov on VMIs
with multiple SRIOV networks
Added appropriate SRIOV e2e test.

[0] kubevirt#7444

Signed-off-by: Ram Lavi <ralavi@redhat.com>
RamLavi added a commit to RamLavi/kubevirt that referenced this issue Aug 17, 2022
Utilise the SRIOV-network PCI-address mapping that was
digested from the multus network-status annotation and mounted to a
volume, in order to create the SRIOV devices.
If the map does not exist, then the mapping will fall back to the former
approach.

Solves issue [0] reported on flawed mapping of sriov on VMIs
with multiple SRIOV networks
Added appropriate SRIOV e2e test.

[0] kubevirt#7444

Signed-off-by: Ram Lavi <ralavi@redhat.com>

ormergi commented Sep 20, 2022

@YitzyD
Fixed by #8226, available in versions v0.53.0, v0.54.0, v0.55.0, v0.56.0 and later.
