
Virtio-Net driver is not respecting VIRTIO_NET_F_MRG_RXBUF #1090

Closed
ddaney-fungible opened this issue Apr 18, 2024 · 13 comments

Comments

@ddaney-fungible

If a Virtio-Net PCI function (device) does not advertise VIRTIO_NET_F_MRG_RXBUF, then the driver must supply a single-descriptor RX buffer large enough to contain the virtio_net_hdr and the RX packet contents.

The Windows driver does not do this: it supplies a two-descriptor buffer, one descriptor for the virtio_net_hdr and another for the packet contents. This behavior is explicitly prohibited by the specification.
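
For illustration, a minimal sketch in C (hypothetical structure and field names, not the actual driver code) of the two RX buffer layouts being described: a single contiguous allocation covered by one descriptor versus the split the Windows driver currently posts.

```c
#include <stdint.h>

/* struct virtio_net_hdr is 12 bytes when VIRTIO_F_VERSION_1 is negotiated. */
struct virtio_net_hdr {
    uint8_t  flags;
    uint8_t  gso_type;
    uint16_t hdr_len;
    uint16_t gso_size;
    uint16_t csum_start;
    uint16_t csum_offset;
    uint16_t num_buffers;
};

/* Layout described in the report: one descriptor covering a single
 * contiguous buffer that holds the header followed by the packet bytes. */
struct rx_buffer_contiguous {
    struct virtio_net_hdr hdr;   /* 12-byte header at the start of the buffer */
    uint8_t data[1514];          /* packet contents follow immediately        */
};

/* Layout the Windows driver is reported to post: two separate regions,
 * one descriptor element for the 12-byte header and a second one for
 * the packet contents. */
struct rx_buffer_split {
    struct virtio_net_hdr *hdr;  /* element 1: header only     */
    uint8_t               *data; /* element 2: packet contents */
};
```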

@YanVugenfirer
Collaborator

I think this is an unfortunate remnant of the support for the pre-1.0 virtio spec.

@ybendito
Collaborator

This was probably broken a long time ago. Until it is fixed, the driver should probably fail to start if this feature is not present. Thank you for pointing this out.

@YanVugenfirer
Collaborator

@ddaney-fungible Out of curiosity - do you experience issues with the non-compliant driver on HW or on a hypervisor other than QEMU/KVM?

@ddaney-fungible
Author

@ddaney-fungible Out of curiosity - do you experience issues with the non-compliant driver on HW or on a hypervisor other than QEMU/KVM?

Yes. The failure was seen in a virtio-net function that is not part of QEMU. On the system in question the EDK2 and Linux drivers seem to work well, but this Windows driver was observed supplying buffers in two chunks, one of size 12 for the header and a second for the packet payload.

@ybendito
Collaborator

ybendito commented Apr 22, 2024

@ddaney-fungible Can you please share with us the device features as offered during feature negotiation? (preferably in hex)
Thanks.

@ddaney-fungible
Author

@ddaney-fungible Can you please share with us the device features as offered during feature negotiation? (preferably in hex) Thanks.

device_feature[0] = 0x00010029
device_feature[1] = 0x00000057

Driver is responding with:

driver_feature[0] = 0x00010029
driver_feature[1] = 0x00000007

@ybendito
Collaborator

ybendito commented Apr 24, 2024

@ddaney-fungible
Just some notes on the feature set:
The device sets VIRTIO_NET_F_CTRL_GUEST_OFFLOADS(2), but this has no effect as the device does not have a control queue and does not indicate support for any guest offloads.
The device sets VIRTIO_NET_F_GSO(6), which has no effect as this bit may only make sense for a legacy device.
Feature bits 4 and 48 are not defined in the spec and are therefore ignored, but future editions of the spec may assign them something unexpected.
All this, of course, is unrelated to the fact that the driver currently ignores VIRTIO_NET_F_MRG_RXBUF.

@ddaney-fungible
Author

I think there is a misunderstanding of how to interpret those values.

device_feature[0] = VIRTIO_NET_F_CSUM |
VIRTIO_NET_F_MTU |
VIRTIO_NET_F_MAC |
VIRTIO_NET_F_STATUS

device_feature[1] = VIRTIO_F_VERSION_1 |
VIRTIO_F_NOTIFICATION_DATA |
VIRTIO_F_ACCESS_PLATFORM |
VIRTIO_F_ORDER_PLATFORM |
VIRTIO_F_RING_PACKED
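
For reference, a small sketch (feature bit numbers as defined in the virtio 1.x specification) that reconstructs the two 32-bit words reported above from the named feature bits:

```c
#include <stdint.h>
#include <stdio.h>

/* virtio-net / transport feature bit numbers from the virtio 1.x spec */
#define VIRTIO_NET_F_CSUM            0
#define VIRTIO_NET_F_MTU             3
#define VIRTIO_NET_F_MAC             5
#define VIRTIO_NET_F_STATUS          16
#define VIRTIO_F_VERSION_1           32
#define VIRTIO_F_ACCESS_PLATFORM     33
#define VIRTIO_F_RING_PACKED         34
#define VIRTIO_F_ORDER_PLATFORM      36
#define VIRTIO_F_NOTIFICATION_DATA   38

int main(void)
{
    uint64_t features = (1ULL << VIRTIO_NET_F_CSUM)   |
                        (1ULL << VIRTIO_NET_F_MTU)    |
                        (1ULL << VIRTIO_NET_F_MAC)    |
                        (1ULL << VIRTIO_NET_F_STATUS) |
                        (1ULL << VIRTIO_F_VERSION_1)  |
                        (1ULL << VIRTIO_F_ACCESS_PLATFORM)  |
                        (1ULL << VIRTIO_F_RING_PACKED)      |
                        (1ULL << VIRTIO_F_ORDER_PLATFORM)   |
                        (1ULL << VIRTIO_F_NOTIFICATION_DATA);

    /* device_feature[0] / device_feature[1] as reported above */
    printf("low  = 0x%08x\n", (unsigned)(features & 0xffffffffu)); /* 0x00010029 */
    printf("high = 0x%08x\n", (unsigned)(features >> 32));         /* 0x00000057 */
    return 0;
}
```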

@ybendito
Collaborator

@ddaney-fungible Sorry, I swapped them by mistake when reading them.

@ybendito
Collaborator

ybendito commented Sep 9, 2024

@ddaney-fungible
The virtio spec says:
5.1.6.3.1 Driver Requirements: Setting Up Receive Buffers
• If VIRTIO_NET_F_MRG_RXBUF is not negotiated:
– If VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6 or VIRTIO_NET_F_GUEST_UFO are negotiated, the driver SHOULD populate the receive queue(s) with buffers of at least 65562 bytes.
– Otherwise, the driver SHOULD populate the receive queue(s) with buffers of at least 1526 bytes.
• If VIRTIO_NET_F_MRG_RXBUF is negotiated, each buffer MUST be at least the size of the struct virtio_net_hdr.
Note: Obviously each buffer can be split across multiple descriptor elements.
If VIRTIO_NET_F_MQ is negotiated, each of receiveq1...receiveqN that will be used SHOULD be populated with receive buffers.
5.1.6.3.2 Device Requirements: Setting Up Receive Buffers
The device MUST set num_buffers to the number of descriptors used to hold the incoming packet.
The device MUST use only a single descriptor if VIRTIO_NET_F_MRG_RXBUF was not negotiated.
Note: This means that num_buffers will always be 1 if VIRTIO_NET_F_MRG_RXBUF is not negotiated.
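
As an aside, the minimum sizes quoted above presumably decompose as the 12-byte virtio_net_hdr plus the largest expected frame; a small sanity check under that assumption:

```c
#include <assert.h>

/* Assumed breakdown of the spec's minimum RX buffer sizes:
 * 1526  = 12 (virtio_net_hdr) + 14 (Ethernet header) + 1500 (default MTU)
 * 65562 = 12 (virtio_net_hdr) + 65550 (maximum frame when GUEST_TSO/UFO
 *         is negotiated) */
#define VIRTIO_NET_HDR_SIZE 12
#define ETH_HDR_SIZE        14
#define DEFAULT_MTU         1500
#define MAX_GSO_FRAME_SIZE  65550

static_assert(VIRTIO_NET_HDR_SIZE + ETH_HDR_SIZE + DEFAULT_MTU == 1526,
              "minimum buffer size without GUEST_TSO/UFO");
static_assert(VIRTIO_NET_HDR_SIZE + MAX_GSO_FRAME_SIZE == 65562,
              "minimum buffer size with GUEST_TSO/UFO");
```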

In the terminology of the spec, a "descriptor" is one or more "descriptor elements" combined together using VIRTQ_DESC_F_NEXT (in the indirect area, if supported, or as several consecutive elements in the descriptor queue); see "2.8.6 Next Flag: Descriptor Chaining".

The point of VIRTIO_NET_F_MRG_RXBUF is that this virtio-net-specific feature allows a network packet to be placed across several consecutive descriptors (not chained together with VIRTQ_DESC_F_NEXT).
Chaining the "descriptor elements" into one "descriptor" is a virtio transport feature, common to all virtio devices.
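
To make the distinction concrete, here is a hedged sketch (split-ring layout for readability, hypothetical helper name, placeholder ring and addresses) of one receive buffer built from two descriptor elements chained with VIRTQ_DESC_F_NEXT, which the spec still counts as a single descriptor/buffer:

```c
#include <stdint.h>

/* Split-ring descriptor element layout per the virtio 1.x spec. */
#define VIRTQ_DESC_F_NEXT  1
#define VIRTQ_DESC_F_WRITE 2

struct virtq_desc {
    uint64_t addr;   /* guest-physical address of the element */
    uint32_t len;    /* length of the element in bytes        */
    uint16_t flags;  /* NEXT, WRITE, ...                      */
    uint16_t next;   /* index of the next chained element     */
};

/* Sketch: post one receive *buffer* that consists of two descriptor
 * *elements* chained with VIRTQ_DESC_F_NEXT (header element + payload
 * element). */
static void post_chained_rx_buffer(struct virtq_desc *ring,
                                   uint16_t i,
                                   uint64_t hdr_pa, uint32_t hdr_len,
                                   uint64_t pkt_pa, uint32_t pkt_len)
{
    ring[i].addr  = hdr_pa;                       /* 12-byte virtio_net_hdr */
    ring[i].len   = hdr_len;
    ring[i].flags = VIRTQ_DESC_F_WRITE | VIRTQ_DESC_F_NEXT;
    ring[i].next  = i + 1;

    ring[i + 1].addr  = pkt_pa;                   /* packet payload */
    ring[i + 1].len   = pkt_len;
    ring[i + 1].flags = VIRTQ_DESC_F_WRITE;       /* end of the chain */
    ring[i + 1].next  = 0;
}
```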

It is probably a good idea to reduce the number of descriptor elements, and we will evaluate the possibility of doing so, but I think there is no violation of the spec. We can request a comment from the virtio-comment or virtio-dev mailing lists if needed.

@ddaney-fungible
Author

OK, it seems like I was conflating the meanings of Buffer and Segment. At this point it is somewhat academic, as I have adjusted the device behavior to work with the driver.

@YanVugenfirer
Collaborator

@ddaney-fungible I am wondering if you support RSS in your HW and if you are aware of the parts of the virtio spec related to RSS.
Also, it would be interesting to hear about the offload support.
