
peek blocks after read on windows #1755

Closed
Sytten opened this issue Jan 25, 2024 · 7 comments

Comments

Sytten commented Jan 25, 2024

Hi! I wanted to continue the issue of tokio-rs/tokio#3789

I used the following code as reproduction:

use tokio::io::AsyncReadExt;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = tokio::net::TcpListener::bind("0.0.0.0:2222").await?;

    loop {
        let (mut socket, _) = listener.accept().await?;

        tokio::spawn(async move {
            let mut buf = [0; 1];
            // The read completes normally...
            let _ = socket.read(&mut buf).await;
            // ...but this peek never completes on Windows.
            let _ = socket.peek(&mut buf).await;
        });
    }
}

I don't know much about Tokio internals, but I spent the last couple of hours debugging.

I found that the waker for the Readiness future used by peek is never called after a single call to read. Digging deeper, GetQueuedCompletionStatusEx blocks when peek is issued after read.

I think this is because Windows doesn't know we are interested in READABLE: peek never registers the READABLE interest at the OS level. It does register the interest at the ScheduledIo level, but we never get there because the thread is parked.

When we issue a read we go through IoSourceState, which re-registers our interests with the IOCP when the operation would block, but when we peek we call the std I/O directly. See:

mio/src/net/tcp/stream.rs

Lines 210 to 212 in 08ee541

    pub fn peek(&self, buf: &mut [u8]) -> io::Result<usize> {
        self.inner.peek(buf)
    }

I think the fix is to route peek through IoSourceState::do_io, but I still need to test it.
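If that's right, the change might look like the following sketch against mio's internals (untested, mirroring how read already goes through do_io):

// In mio/src/net/tcp/stream.rs -- a sketch, not a verified patch.
// do_io runs the operation and, when it returns WouldBlock,
// re-registers the socket with the selector so readiness events
// are delivered again on Windows.
pub fn peek(&self, buf: &mut [u8]) -> io::Result<usize> {
    self.inner.do_io(|inner| inner.peek(buf))
}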

@Thomasdezeeuw (Collaborator)

Can you reproduce the error with Mio types (not Tokio)?


Sytten commented Jan 25, 2024

If you look at the other issue, someone did indeed reproduce it with only Mio.

@Thomasdezeeuw (Collaborator)

> If you look at the other issue, someone did indeed reproduce it with only Mio.

Then why do you include an example using Tokio in an issue for Mio?


Sytten commented Jan 25, 2024

I am sorry, this is the first time I've tried to contribute to the project, and the line between the projects was still a bit blurry to me. It was just a simpler example for me to use. I figured I could try to help since the original issue didn't get much traction.


Sytten commented Jan 25, 2024

Since we are not sure whose fault it is, I will close the issue for now. If someone has the same problem, feel free to re-open with a proper Mio example.

@Thomasdezeeuw (Collaborator)

I left a comment in the Tokio thread, but if there is a problem with Mio I'd still like to know, so feel free to reopen when/if it's clear it's a problem with Mio.


Sytten commented Jan 26, 2024

I commented back; I'm not sure I have enough knowledge to create a reproduction. I didn't see any tests around peek in Mio though, so at the very least adding a test to confirm that Mio is working as intended would be good. That test would need to check that:

  • There is a first readiness event on first data
  • Test does a peek -> Receives data
  • Test does a read -> Receives data
  • Test does a peek -> Receives WouldBlock
  • New data is in the socket
  • There is no readiness event sent by mio

Only if this is the case for all platforms would I consider Mio "bug free" here. I highly suspect that the last check will fail on Unix platforms, but maybe not.
