UdpClientStream fails to parse large responses #1973
Related to this is the fact that DNS truncation (TC=1) doesn't appear to be happening in my test. EDNS has max_payload set to 512 (the default), yet the response still comes through without truncation. I believe that issue may be related. I'm happy to raise this as a separate bug if you think it's a real issue.
Maybe run a little
@djc Yeah, that was my thought as well. Perhaps we could use the EDNS max_payload value from the request if available, to trim this down a bit? And/or we could use the MTU of the network interface ... this would likely only be large-ish for a loopback interface. FYI, the 2048 value goes back all the way to the initial implementation: #46.
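The sizing idea above can be sketched as a small helper: pick the receive buffer size from the request's EDNS max_payload when present, otherwise fall back to a shared maximum. The constant and function names here are illustrative assumptions, not the actual hickory-dns API.

```rust
// RFC 6891 recommended maximum UDP payload; an assumed shared constant.
const MAX_RECEIVE_BUFFER_SIZE: usize = 4096;

/// Choose a receive buffer size from the EDNS max_payload advertised in
/// the request, clamped so a bogus value can't force a huge allocation.
fn receive_buffer_size(edns_max_payload: Option<u16>) -> usize {
    match edns_max_payload {
        // Honor the advertised payload size, but never exceed the shared maximum.
        Some(p) => (p as usize).min(MAX_RECEIVE_BUFFER_SIZE),
        // Without EDNS we can't know the peer's limit; use the maximum.
        None => MAX_RECEIVE_BUFFER_SIZE,
    }
}

fn main() {
    assert_eq!(receive_buffer_size(Some(1232)), 1232);
    assert_eq!(receive_buffer_size(None), 4096);
    assert_eq!(receive_buffer_size(Some(u16::MAX)), 4096);
}
```

This avoids depending on MTU lookups entirely, sidestepping the heavyweight-dependency concern raised later in the thread.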
Previously, the UdpClientStream was using a fixed `2048` for the size of the receive buffer. This can cause problems on interfaces with a larger MTU. Updated the code to use the MTU for the local interface, if available. Otherwise default to the maximum packet size. Fixes: hickory-dns#1973
@bluejekyll @djc I threw together a WIP PR: #1975. Right now it just uses MTU, but we can expand it. Should probably also have a proper e2e test specifically for this problem.
I guess the MTU is accurate for loopback interfaces at least, but actual network path MTUs would be constrained by any number of other interfaces on the path to the remote, so it's maybe not very meaningful. Additionally, the crates I've looked at so far for retrieving the MTU are pretty heavyweight dependencies, which IMO isn't very attractive.
Previously, the UdpClientStream was using a fixed `2048` for the size of the receive buffer. This can cause problems on interfaces with a larger MTU. hickory-dns#1096 adjusted this value on the server side to 4096 (the maximum as recommended by RFC6891). This sets a constant that is shared by the UDP client and server. Additionally, the client uses EDNS in the request to further trim down the buffer size. Fixes: hickory-dns#1973
This fixes a couple of issues for UDP on both the client and server:

* Previously, the UdpClientStream was using a fixed `2048` for the size of the receive buffer. This can cause problems on interfaces with a larger MTU. hickory-dns#1096 adjusted this value on the server side to 4096 (the maximum as recommended by RFC 6891). This PR sets a constant that is shared by the UDP client and server. Additionally, the client uses EDNS in the request to further trim down the buffer size.
* The server previously was not setting a maximum for the `BinEncoder`, which defaults to `u16::MAX` (i.e. effectively no truncation for UDP). This PR sets an appropriate maximum for the `BinEncoder` based on the response EDNS and protocol being used.

Fixes: hickory-dns#1973
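The second fix in that commit message — capping the encoder so truncation can actually happen — can be sketched as choosing a maximum message size from the protocol and the requester's EDNS. All names here (`Protocol`, `encoder_max_size`, the constants) are assumptions for illustration, not the hickory-dns API.

```rust
enum Protocol {
    Udp,
    Tcp,
}

const MAX_PAYLOAD_LEN: u16 = 4096; // RFC 6891 recommended maximum for UDP
const MIN_PAYLOAD_LEN: u16 = 512;  // classic RFC 1035 UDP limit

/// Pick the maximum size the response encoder may produce. Leaving this
/// at u16::MAX for UDP effectively disables truncation (TC=1).
fn encoder_max_size(protocol: Protocol, edns_max_payload: Option<u16>) -> u16 {
    match protocol {
        // TCP messages carry a 2-byte length prefix, so the full range is fine.
        Protocol::Tcp => u16::MAX,
        // For UDP, honor the client's EDNS payload size, clamped to sane
        // bounds; without EDNS, fall back to the classic 512-byte limit.
        Protocol::Udp => edns_max_payload
            .map(|p| p.clamp(MIN_PAYLOAD_LEN, MAX_PAYLOAD_LEN))
            .unwrap_or(MIN_PAYLOAD_LEN),
    }
}

fn main() {
    assert_eq!(encoder_max_size(Protocol::Tcp, None), u16::MAX);
    assert_eq!(encoder_max_size(Protocol::Udp, None), 512);
    assert_eq!(encoder_max_size(Protocol::Udp, Some(1232)), 1232);
    assert_eq!(encoder_max_size(Protocol::Udp, Some(9000)), 4096);
}
```

With a bound like this in place, an over-large UDP response gets truncated and flagged with TC=1, prompting the client to retry over TCP.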
Describe the bug

I'm trying to build a local test for truncation, and ran into an issue. The test uses `UdpClientStream` to call a local server (`Catalog` + `InMemoryAuthority`) which responds with a large record set, resulting in a UDP payload of 2080 bytes (verified via Wireshark). Running with tracing enabled, I see: [trace output omitted]

Digging in a little further, I see that the client is only seeing 2048 of the 2080 bytes. The MTU for `lo0` is 16384, so it's not being truncated down the stack (Wireshark confirms). The problem appears to be that `UdpClientStream` explicitly limits the size of the reply buffer to 2048. Since each read assumes a complete UDP packet, parsing fails.

To Reproduce
See description.
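The buffer truncation described above can be reproduced with plain std sockets, independent of hickory-dns: a 2048-byte receive buffer silently drops the tail of a 2080-byte datagram (on Linux; some platforms report an error instead).

```rust
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    // Two loopback sockets standing in for the client and the server.
    let server = UdpSocket::bind("127.0.0.1:0")?;
    let client = UdpSocket::bind("127.0.0.1:0")?;

    // Send a 2080-byte datagram, like the oversized DNS response.
    let payload = vec![0xAAu8; 2080];
    client.send_to(&payload, server.local_addr()?)?;

    // Receive into a fixed 2048-byte buffer: recv_from reports only the
    // bytes that fit, and the remaining 32 bytes are discarded, so any
    // parser expecting a complete message will fail.
    let mut buf = [0u8; 2048];
    let (n, _) = server.recv_from(&mut buf)?;
    assert_eq!(n, 2048);
    Ok(())
}
```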
Expected behavior

Replies larger than 2048 bytes should be supported by `UdpClientStream`. Docs for tokio `UdpSocket` indicate that reads should generally be done with the max UDP packet size of 65536. It may be a larger buffer than we'd like in general, but it would at least be correct.
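For contrast with the failure above, reading with the maximum UDP packet size receives the 2080-byte datagram intact. This is a minimal std-socket sketch of the suggested fix, not the hickory-dns code itself.

```rust
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    let server = UdpSocket::bind("127.0.0.1:0")?;
    let client = UdpSocket::bind("127.0.0.1:0")?;
    client.send_to(&vec![0x42u8; 2080], server.local_addr()?)?;

    // A 65536-byte buffer is large enough for any UDP datagram, so the
    // full payload arrives and a message parser would see every byte.
    let mut buf = vec![0u8; 65536];
    let (n, _) = server.recv_from(&mut buf)?;
    assert_eq!(n, 2080);
    Ok(())
}
```

The cost is a larger allocation per read; the EDNS-based sizing discussed earlier in the thread trims that down while staying correct.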
Version:
Crate: client, server
Version: 0.22.0

Additional context
NA