
Configurable maximum packet size #3385

Closed
neilalexander opened this issue Apr 17, 2022 · 17 comments · Fixed by #4503

@neilalexander

neilalexander commented Apr 17, 2022

So far there is no way to configure a maximum packet size above 1252 bytes for IPv4 and 1232 bytes for IPv6. This means that on networks with a higher MTU (e.g. Ethernet links in private networks with 9000-byte jumbo frames, or in my case, overlay networks with packet sizes up to 65535 bytes) we cannot take advantage of the larger packet sizes in streams, and we cannot send datagrams larger than 1220 bytes.
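For context, those per-address-family limits follow from header arithmetic: the usable QUIC packet size is the link MTU minus the IP and UDP headers. A minimal sketch of that calculation (the helper name is made up for illustration, not a quic-go API):

```go
package main

import "fmt"

// maxQUICPacketSize returns the largest QUIC packet that fits in one UDP
// datagram on a link with the given MTU (hypothetical helper, for illustration).
func maxQUICPacketSize(mtu int, ipv6 bool) int {
	const udpHeader = 8
	ipHeader := 20 // IPv4 header without options
	if ipv6 {
		ipHeader = 40 // fixed IPv6 header
	}
	return mtu - ipHeader - udpHeader
}

func main() {
	fmt.Println(maxQUICPacketSize(1280, false)) // 1252
	fmt.Println(maxQUICPacketSize(1280, true))  // 1232
	fmt.Println(maxQUICPacketSize(9000, false)) // 8972 on a jumbo-frame link
}
```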

It would be amazing if this was configurable for these environments.

@marten-seemann
Member

How would you deal with cases where the actual MTU is smaller than you expect?

Furthermore, would this be a per-connection or a per-server option?

@neilalexander
Author

I would naively expect that the message would be padded up to the expected size during the handshake to test the viability of the MTU, and if a problem is identified there, it would either fail the handshake or fall back to a lower MTU.

In our particular use-case, we have a solid guarantee from the layer beneath us that the MTU is always 65535 bytes so we don't even really want to do that (in fact it is quite an annoying/wasteful property of QUIC to have to send so much data up front, particularly over slow links, when you already know for certain that the MTU requirement is met, but I expect that's specced behaviour).

I would imagine that this would need to be configured per-connection if it is a part of the client initiation to send that much data to a given destination — the server can't possibly know if the return path is going to be asymmetric at any rate — but it's also not inconceivable that the server might want to limit the maximum allowable size for other reasons.

@dimalukas

Anything new regarding configuring the MTU for QUIC?
I need to set the max MTU to 1200 for my case.

@YRoelvink

YRoelvink commented Sep 6, 2022

I am actually running into a similar issue as @dimalukas: I am using a link that has known compatibility issues with regards to PMTUD, so the ability to manually set a max MTU would help me greatly!

@marten-seemann
Member

@YRoelvink What's the issue with PMTUD, and what's the MTU you'd like to set?

@YRoelvink

YRoelvink commented Sep 7, 2022

I have had contact with my link provider (I am testing in a dev environment with a dedicated link) and they, for now, only informed me that there have been previous issues with PMTUD using similar setups. They did, however, suggest that decreasing the MTU could improve the performance of the overall system, so that is what I would like to verify.

@marten-seemann
Member

I'd be curious what those issues would be.

PMTUD is designed to be very non-invasive: occasionally, we'll send a packet that's larger than the current MTU. If that packet is acknowledged, we conclude that the link supports a higher MTU and we start sending packets of that size. We continue probing for larger packet sizes until one of those probe packets is lost, interpreting the loss as "this link doesn't support this packet size".

In effect, PMTUD does a binary search of the packet size space between 1280 and 1500 bytes. In total, we shouldn't send more than 10 probe packets over the lifetime of a connection.
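The probing described above can be sketched as a binary search over candidate packet sizes. This is an illustration of the idea, not quic-go's actual implementation; `probeSearch` and `ackFn` are hypothetical names:

```go
package main

import "fmt"

// probeSearch sketches DPLPMTUD's binary search between lo and hi.
// ackFn reports whether a probe packet of the given size was acknowledged,
// i.e. whether the path supports that size.
func probeSearch(lo, hi int, ackFn func(size int) bool) (mtu, probes int) {
	mtu = lo // the minimum is assumed to work
	for lo <= hi {
		size := (lo + hi) / 2
		probes++
		if ackFn(size) {
			mtu = size // probe acknowledged: raise the lower bound
			lo = size + 1
		} else {
			hi = size - 1 // probe lost: lower the upper bound
		}
	}
	return mtu, probes
}

func main() {
	pathMTU := 1400 // the link silently drops anything larger
	mtu, probes := probeSearch(1280, 1500, func(size int) bool { return size <= pathMTU })
	// converges to the path MTU well within the ~10-probe budget
	fmt.Println(mtu, probes)
}
```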

@cliffc-spirent
Contributor

I believe some network equipment can be (and is) configured to ignore the DF bit, so the larger discovery frames are fragmented and forwarded along and transparently defragmented at the far end. I have seen quic-go using an MTU of 1452 on my VPN with an MTU of 1400.

@marten-seemann
Member

Once we implement GSO (see #2877 (comment) for recent discussion), we'll need much larger buffers to send out packets. It would be nice if we could make use of that to also enable the sending of jumbo packets.

Here's an API proposal for a Config change. MaxPacketSize would replace the DisablePathMTUDiscovery flag.

```go
type Config struct {
	...
	// MaxPacketSize is the maximum size of UDP datagrams.
	// QUIC connections start sending packets around 1280 bytes during the handshake,
	// and then run DPLPMTUD to determine the MTU, up to MaxPacketSize, supported by the link.
	// If unset, a default value of 1452 bytes is used.
	// Setting it to -1 disables DPLPMTUD.
	// Values above 65535 are invalid.
	MaxPacketSize int
}
```
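To make the proposed semantics concrete, here's a sketch of how the field might resolve to an effective limit. The struct and helper below are a hypothetical mirror of the proposal, not quic-go's actual API:

```go
package main

import "fmt"

// Config mirrors the proposed quic-go Config field (hypothetical shape).
type Config struct {
	MaxPacketSize int
}

// effectiveMaxPacketSize resolves the proposed semantics:
// unset (0) → 1452 bytes with DPLPMTUD enabled,
// -1 → DPLPMTUD disabled (stay at the handshake size),
// values above the 65535-byte UDP limit → invalid.
func effectiveMaxPacketSize(c *Config) (size int, pmtudEnabled bool, err error) {
	switch {
	case c.MaxPacketSize == 0:
		return 1452, true, nil
	case c.MaxPacketSize == -1:
		return 1280, false, nil
	case c.MaxPacketSize > 65535:
		return 0, false, fmt.Errorf("invalid MaxPacketSize: %d", c.MaxPacketSize)
	default:
		return c.MaxPacketSize, true, nil
	}
}

func main() {
	size, pmtud, _ := effectiveMaxPacketSize(&Config{MaxPacketSize: 9000})
	fmt.Println(size, pmtud) // 9000 true

	size, pmtud, _ = effectiveMaxPacketSize(&Config{})
	fmt.Println(size, pmtud) // 1452 true
}
```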

With #3727 it would be possible to set this field (among others) based on the remote's IP address.

What do you guys think? Would that API work for your use case?

@cliffc-spirent
Contributor

I think that could work for me.

@neilalexander
Author

Ideally the maximum packet size and whether or not to run DPLPMTUD would be separate flags — in my use-case we have guarantees from the lower layer so don’t need to work upwards to discover the max packet size — but otherwise yes, this seems like a good step in the right direction!

@marten-seemann
Member

@neilalexander want to make an API proposal? It needs to work on links where we don’t know the MTU in advance as well.

@neilalexander
Author

@marten-seemann I can certainly try — now that I've thought about it some more, my best proposal would be to add an additional MinPacketSize flag too, which controls the starting packet size during the handshake, and automatically disable DPLPMTUD in the specific case that MinPacketSize and MaxPacketSize are set to the same value:

```go
type Config struct {
	...
	// MinPacketSize and MaxPacketSize control the packet sizes for UDP datagrams.
	// If MinPacketSize is unset, a default value of 1280 bytes will be used during the handshake.
	// If MaxPacketSize is unset, a default value of 1452 bytes will be used.
	// DPLPMTUD will automatically determine the MTU supported by the link, up to MaxPacketSize,
	// except in the case where MinPacketSize and MaxPacketSize are configured to the same value,
	// in which case path MTU discovery will be disabled.
	// Values above 65535 are invalid.
	MinPacketSize int
	MaxPacketSize int
}
```

That way it gives the flexibility to set both upper and lower bounds and to optionally disable DPLPMTUD by just setting both options to the same value, i.e. I could just set MinPacketSize and MaxPacketSize to 65535 to get a fixed packet size of 64KB and disable DPLPMTUD for example, or I could set 1280 and 65535 and let DPLPMTUD work out what's best.
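A sketch of the proposed interaction between the two fields (names and defaults are taken from the proposal above; this is illustrative, not quic-go code):

```go
package main

import "fmt"

// pmtudEnabled sketches the proposed semantics: DPLPMTUD runs between the
// configured bounds, and is disabled when the bounds coincide.
// Zero means "unset" and falls back to the proposal's defaults (1280/1452).
func pmtudEnabled(minSize, maxSize int) bool {
	if minSize == 0 {
		minSize = 1280
	}
	if maxSize == 0 {
		maxSize = 1452
	}
	return minSize != maxSize
}

func main() {
	fmt.Println(pmtudEnabled(65535, 65535)) // false: fixed 64 KiB packets, discovery off
	fmt.Println(pmtudEnabled(1280, 65535))  // true: discover within [1280, 65535]
	fmt.Println(pmtudEnabled(0, 0))         // true: defaults, discover within [1280, 1452]
}
```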

WDYT?

@marten-seemann
Member

That would work as well. I'm just a bit concerned that MinPacketSize could be interpreted as "quic-go will never send packets smaller than this".

@neilalexander
Author

Maybe InitialPacketSize as a name is less confusing?

@mixmasala

+1 for this feature: I am trying to use quic-go to implement reliable transport over a lossy overlay network where the payload size is configurable but known. I implemented net.PacketConn, and this works great, but I'd like to be able to take advantage of larger payload sizes, because the overlay network packet format adds quite a bit of overhead and 1200 bytes is too small.

I attempted to implement OOBCapablePacketConn, but because the socket options are set via the syscall interface, I'm not able to hook the Control function and set the returned size value to trick quic-go into trying larger payload sizes. In fact, just setting a larger minimum without path MTU discovery would be ideal.

Happy to help implement this feature. Is anyone working on this?

@rhino1998

+1 for this feature as well. Apple's new coredevicetunnel (for device debug/development) is very particular about packet/datagram sizes and it'll eagerly disconnect if it detects initial packets/size limits that are too small. The protocol tunnels ipv6 packets as QUIC datagrams and expects full MTU size (1420B) packets to be sendable/receivable immediately.
