Configurable maximum packet size #3385
How would you deal with cases where the actual MTU is smaller than you expect? Furthermore, would this be a per-connection or a per-server option?
I would naively expect that the message would be padded up to the expected size. In our particular use-case, we have a solid guarantee from the layer beneath us that the MTU is always 65535 bytes, so we don't even really want to do that. (In fact it is quite an annoying/wasteful property of QUIC to have to send so much data up front, particularly over slow links, when you already know for certain that the MTU requirement is met, but I expect that's specced behaviour.) I would imagine that this would need to be configured per-connection if it is part of the client initiation to send that much data to a given destination — the server can't possibly know if the return path is going to be asymmetric at any rate — but it's also not inconceivable that the server might want to limit the maximum allowable size for other reasons.
Anything new regarding configuring the MTU for QUIC?
I am actually running into a similar issue as @dimalukas: I am using a link that has known compatibility issues with regard to PMTUD, so the ability to manually set a max MTU would help me greatly!
@YRoelvink What's the issue with PMTUD, and what's the MTU you'd like to set?
I have had contact with my link provider (I am testing in a dev environment with a dedicated link) and they, for now, only informed me that there have been previous issues with PMTUD using similar setups. They did, however, suggest that decreasing the MTU could improve the performance of the overall system, so that is what I would like to verify.
I'd be curious what those issues might be. PMTUD is designed to be very non-invasive: occasionally, we'll send a packet that's larger than the current MTU. If that packet is acknowledged, we conclude that the link supports higher MTUs and we start sending packets of that size. We continue probing for larger packet sizes until one of those probe packets is lost, interpreting the loss as "this link doesn't support this packet size". In effect, PMTUD does a binary search of the packet-size space between 1280 and 1500 bytes. In total, we shouldn't send more than 10 probe packets over the lifetime of a connection.
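To make that probing scheme concrete, here is a minimal sketch of the binary search described above. The type and method names are illustrative only, not quic-go's actual DPLPMTUD implementation:

```go
// mtuFinder is a toy version of the binary search described above.
type mtuFinder struct {
	lo int // largest packet size known to be supported
	hi int // smallest packet size assumed not to be supported
}

func newMTUFinder() *mtuFinder {
	return &mtuFinder{lo: 1280, hi: 1501}
}

// NextProbe returns the next probe packet size, or 0 once the search
// has converged. For the 1280..1500 range this converges within ~8
// probes, consistent with the "no more than 10" figure above.
func (f *mtuFinder) NextProbe() int {
	if f.hi-f.lo <= 1 {
		return 0
	}
	return (f.lo + f.hi) / 2
}

// ProbeAcked records that a probe of the given size was acknowledged:
// the link supports this size, so raise the lower bound.
func (f *mtuFinder) ProbeAcked(size int) { f.lo = size }

// ProbeLost records that a probe was lost, which DPLPMTUD interprets
// as the link not supporting this size: lower the upper bound.
func (f *mtuFinder) ProbeLost(size int) { f.hi = size }
```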
I believe some network equipment can be (and is) configured to ignore the DF bit, so the larger discovery packets are fragmented, forwarded along, and transparently defragmented at the far end. I have seen quic-go using an MTU of 1452 on my VPN with an MTU of 1400.
Once we implement GSO (see #2877 (comment) for recent discussion), we'll need much larger buffers to send out packets. It would be nice if we could make use of that to also enable the sending of jumbo packets. Here's an API proposal:

```go
type Config struct {
	...
	// MaxPacketSize is the maximum size of UDP datagrams.
	// QUIC connections start sending packets around 1280 bytes during the handshake,
	// and then run DPLPMTUD to determine the MTU, up to MaxPacketSize, supported by the link.
	// If unset, a default value of 1452 bytes is used.
	// Setting it to -1 disables DPLPMTUD.
	// Values above 65535 are invalid.
	MaxPacketSize int
}
```

With #3727 it would be possible to set this field (among others) based on the remote's IP address. What do you guys think? Would that API work for your use case?
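For illustration, usage under this proposal might look like the sketch below. quic.Config itself is real, but MaxPacketSize is the hypothetical field from the proposal above and does not exist in released quic-go:

```go
import "github.com/quic-go/quic-go"

// jumboConfig builds a config letting DPLPMTUD probe up to 9000-byte
// jumbo frames. MaxPacketSize is the proposed (not yet existing) field.
func jumboConfig() *quic.Config {
	return &quic.Config{
		MaxPacketSize: 9000,
	}
}

// Per the proposal, -1 would disable DPLPMTUD entirely.
func noPMTUDConfig() *quic.Config {
	return &quic.Config{MaxPacketSize: -1}
}
```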
I think that could work for me.
Ideally the maximum packet size and whether or not to run DPLPMTUD would be separate flags — in my use-case we have guarantees from the lower layer so don't need to work upwards to discover the max packet size — but otherwise yes, this seems like a good step in the right direction!
@neilalexander want to make an API proposal? It needs to work on links where we don't know the MTU in advance as well.
@marten-seemann I can certainly try — now that I've thought about it some more, my best proposal would be to add an additional option for the minimum packet size (say, MinPacketSize) alongside MaxPacketSize.

That way it gives the flexibility to set both upper and lower bounds, and to optionally disable DPLPMTUD by just setting both options to the same value, i.e. in my case I could just set both to 65535. WDYT?
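As a sketch, that two-field variant could look like this. Both field names are hypothetical, following the naming in this thread:

```go
type Config struct {
	// MinPacketSize is the packet size connections start out with;
	// DPLPMTUD would not probe below this value. (Hypothetical field.)
	MinPacketSize int
	// MaxPacketSize is the upper bound for DPLPMTUD probing. (Hypothetical field.)
	MaxPacketSize int
}

// With a layer-below guarantee of a 65535-byte MTU, setting both bounds
// to the same value would effectively disable DPLPMTUD:
var fixedMTU = &Config{MinPacketSize: 65535, MaxPacketSize: 65535}
```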
That would work as well. I'm just a bit concerned that MinPacketSize might be a misleading name, since it's really the size we start out with rather than a bound we enforce.
Maybe InitialPacketSize?
+1 for this feature: I am trying to use quic-go to implement reliable transport over a lossy overlay network where the payload size is configurable but known. I implemented net.PacketConn, and this works great, but I'd like to be able to take advantage of larger payload sizes because the overlay network packet format adds quite a bit of overhead and 1200 bytes is too small. I attempted to implement OOBCapablePacketConn, but because socket options are set via the syscall interface, I'm not able to hook the Control function and set the returned size value to trick quic-go into trying larger payload sizes. In fact, just setting a larger minimum without path MTU discovery would be ideal. Happy to help implement this feature; is anyone working on this?
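For context, the net.PacketConn wiring mentioned above looks roughly like the following. overlayEndpoint and its Recv/Send methods are stand-ins for the actual overlay transport, not real quic-go API:

```go
import (
	"net"
	"time"
)

// overlayEndpoint stands in for the real overlay-network handle; only
// its Recv/Send behaviour matters for this sketch.
type overlayEndpoint struct{ /* ... */ }

func (e *overlayEndpoint) Recv(p []byte) (int, net.Addr, error)  { panic("stub: overlay receive") }
func (e *overlayEndpoint) Send(p []byte, to net.Addr) (int, error) { panic("stub: overlay send") }

// overlayConn adapts the overlay endpoint to net.PacketConn so it can
// be handed to quic.Listen / quic.Dial. Because it is not an
// OOBCapablePacketConn, quic-go falls back to conservative packet sizes.
type overlayConn struct {
	ep    *overlayEndpoint
	local net.Addr
}

func (c *overlayConn) ReadFrom(p []byte) (int, net.Addr, error)  { return c.ep.Recv(p) }
func (c *overlayConn) WriteTo(p []byte, a net.Addr) (int, error) { return c.ep.Send(p, a) }
func (c *overlayConn) Close() error                              { return nil }
func (c *overlayConn) LocalAddr() net.Addr                       { return c.local }
func (c *overlayConn) SetDeadline(t time.Time) error             { return nil }
func (c *overlayConn) SetReadDeadline(t time.Time) error         { return nil }
func (c *overlayConn) SetWriteDeadline(t time.Time) error        { return nil }
```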
+1 for this feature as well. Apple's new coredevicetunnel (for device debug/development) is very particular about packet/datagram sizes, and it'll eagerly disconnect if it detects initial packets/size limits that are too small. The protocol tunnels IPv6 packets as QUIC datagrams and expects full-MTU-size (1420-byte) packets to be sendable/receivable immediately.
So far there is no way to configure a maximum packet size above 1252 bytes for IPv4 and 1232 bytes for IPv6. This means that on networks with a higher MTU (e.g. Ethernet links in private networks with 9000-byte jumbo frames, or in my case, overlay networks with packet sizes up to 65535 bytes) we cannot take advantage of the extra packet size in streams, and we cannot send datagrams larger than 1220 bytes.
It would be amazing if this was configurable for these environments.