
Proposal: UdpSocket::peek_sender() #5491

Closed
abonander opened this issue Feb 21, 2023 · 13 comments · Fixed by #5520
Labels
A-tokio Area: The main tokio crate C-feature-request Category: A feature request. M-net Module: tokio/net

Comments

@abonander (Contributor) commented Feb 21, 2023

Is your feature request related to a problem? Please describe.
For implementing a UDP-based gossip protocol, I would prefer not to require a separate connected socket per peer connection. While recv_from() and send_to() get me most of the way there, there's a piece of the puzzle missing.

When handling datagrams on a bound socket, it can be important to know the sender of the next datagram in the queue without consuming it, e.g. to decide whether we even want to read the datagram into application memory, or to look up per-peer application state such as a DTLS session.

For DTLS specifically, the OpenSSL implementation provides its own read buffer per session, so we want to get the peer address of the datagram, look up the session, and then read the datagram into the DTLS read buffer, preferably without any redundant copies.

UdpSocket::peek_from() is almost what we need, but its Windows-specific behavior is problematic: we're forced to provide a buffer large enough to hold the whole datagram whether or not we actually want to read it yet, or else the sender information is discarded along with the WSAEMSGSIZE error.

Describe the solution you'd like
I experimented with directly calling recvfrom() on Windows and found that it does populate the sockaddr out-pointer even when it returns WSAEMSGSIZE, so I'd propose that UdpSocket gain a new method:

```rust
impl UdpSocket {
    /// This could/should also have `poll_peek_sender()` and `try_peek_sender()` counterparts.
    pub async fn peek_sender(&self) -> std::io::Result<SocketAddr> {
        // ...
    }
}
```

For non-Windows platforms this can trivially be implemented in terms of peek_from(), but on Windows this should make a direct recvfrom() call, passing a zero-size buffer, setting the MSG_PEEK flag, and suppressing the WSAEMSGSIZE error.

Part of the purpose of opening this issue is to get feedback on where this should live, since I understand that most of the I/O methods in Tokio invoke methods in Mio, which in turn invoke their std counterparts. I would prefer not to go through the process of championing an RFC, because I simply don't have the time or energy to endure that process these days.

Describe alternatives you've considered
Ideally peek_from() could just suppress the WSAEMSGSIZE error in the name of consistency as that's a Windows-specific idiosyncrasy, but that would be up to std::net::UdpSocket::peek_from(), and it would be a behavior change.

I can just use recv_from(), reading the datagram and getting the sender every time, then copying the data into the DTLS read buffer if applicable, but that's a redundant copy which could be up to 64 KiB per datagram. That's a potential vector for a denial-of-service attack; in practice our application limits datagrams to a more reasonable size just under the Ethernet MTU, but even so, a redundant copy of every datagram adds up when they're all 1+ KiB.

I can also make the recvfrom() call directly in my application; however, that's suboptimal as it requires upholding a number of unsafe invariants. Anyone else implementing this themselves would risk making any number of mistakes resulting in undefined or simply wrong behavior. As noted in the Gist, I actually forgot to convert the port value from big-endian, so even my first attempt had a pretty significant bug.

socket2 is probably the most appropriate non-std place for this to live, but for convenience I would like to see it available in Tokio as well, as shuttling back and forth between socket2::Socket and tokio::net::UdpSocket is still potentially quite error-prone.

Additional context
I had previously spoken to @Noah-Kennedy on the Tokio Discord about this so I'd like his input here.

@abonander abonander added A-tokio Area: The main tokio crate C-feature-request Category: A feature request. labels Feb 21, 2023
@Darksonn Darksonn added the M-net Module: tokio/net label Feb 21, 2023
@notgull (Contributor) commented Feb 22, 2023

Is there a PR/proposal to add this to libstd? I think this could also be useful there.

@abonander (Contributor, Author) commented Feb 22, 2023

That would most likely require an RFC which I don't have the bandwidth to deal with. If someone else wants to do that or has an end-run around the RFC process, by all means.

As far as I can tell, the API of UdpSocket has remained largely unchanged for a long time, and the issue of peek_from()'s behavior on Windows was not discussed in the tracking issue for its stabilization.

I would be happy to open a PR to any of the non-std crates that concern this feature (socket2, Mio, Tokio) if we come to a consensus on where the base implementation should actually live.

@abonander (Contributor, Author):

Since Tokio does use socket2 internally, I reckon that'd be the place to incubate the blocking API version.

@Darksonn (Contributor):

The socket2 crate would certainly be the obvious place to start.

@abonander (Contributor, Author):

I can open a PR to socket2.

@abonander (Contributor, Author):

rust-lang/socket2#389 has been merged, not sure when 0.5 will be released though.

@atouchet:

socket2 0.5 has been released.

abonander added a commit to abonander/tokio that referenced this issue Feb 27, 2023
@abonander (Contributor, Author) commented Feb 27, 2023

@Darksonn @notgull I just noticed that using socket2 0.5 would require an MSRV bump to 1.63.0 because of its use of std::os::fd::AsFd. Is that acceptable? Rust 1.63.0 was released on 2022-08-11, a couple of weeks more than six months ago.

@Darksonn (Contributor):

If possible, we would like to avoid doing an MSRV bump just for the sake of a dependency.

@abonander (Contributor, Author):

Yeah, that just means backporting my addition to 0.4.

@Thomasdezeeuw (Contributor):

I think that adding the I/O safety trait implementations (https://rust-lang.github.io/rfcs/3128-io-safety.html) directly to Tokio would also be worth doing, even though it would likewise require an MSRV bump. It makes working with raw(-ish) fds much less error-prone.

@Darksonn (Contributor):

We're currently looking into implementing the I/O safety traits conditionally, using a build script to detect the Rust version. See #5514 for this, or the thread in #tokio-dev on Discord.

abonander added a commit to abonander/tokio that referenced this issue Mar 1, 2023
abonander added a commit to abonander/tokio that referenced this issue Mar 1, 2023
@abonander (Contributor, Author):

Opened #5520 contingent on socket2 0.4.8 being released.

abonander added a commit to abonander/tokio that referenced this issue Mar 3, 2023
abonander added a commit to abonander/tokio that referenced this issue Mar 4, 2023
abonander added a commit to abonander/tokio that referenced this issue Mar 4, 2023
abonander added a commit to abonander/tokio that referenced this issue Mar 7, 2023
abonander added a commit to abonander/tokio that referenced this issue Mar 7, 2023
abonander added a commit to abonander/tokio that referenced this issue Mar 7, 2023
abonander added a commit to abonander/tokio that referenced this issue Mar 8, 2023