
Inconsistent edge-triggered behavior between platforms #1612

Closed
sullivan-sean opened this issue Aug 19, 2022 · 5 comments

Comments

@sullivan-sean

sullivan-sean commented Aug 19, 2022

The current edge-triggered behavior on Windows is not consistent with the behavior of EPOLLET in epoll or EV_CLEAR in kqueue. This can be seen with the following minimal example: https://github.com/sullivan-sean/mio-edge-triggered

On Windows, interests for a source are cleared on event receipt and reset on WouldBlock errors. If a source is polled until readable and a read call is then made, this does not reset the interests. Another read after all of the data has been consumed (so that it returns WouldBlock) is necessary to reset the interests.

Edge-triggered behavior, as described in the epoll docs, "only delivers events when changes occur on the monitored file descriptor." There is a question of whether a partial read is actually considered a change on a file descriptor, i.e. is the Windows behavior still technically "edge-triggered" and in some ways more correct by waiting for WouldBlock to reset interests? In playing around with the epoll and kqueue behavior, it seems that even if only a small read is performed and there is still more data to be read, a subsequent poll will still emit a read-readiness event. This suggests that in practice short reads do reset "read-readiness" in edge-triggered mode.

For consistency, then, sources on Windows would need to reset interests on successful I/O and not just WouldBlock errors.
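
For reference, the read pattern that behaves the same on all platforms under the behavior described above is to keep reading until WouldBlock before polling again. A minimal sketch using plain std::io (the function and names are illustrative, not taken from the linked reproduction):

use std::io::{self, Read};

// Sketch only: read until WouldBlock so that interests are reset on Windows
// as well. `source` stands for any non-blocking stream registered with mio,
// e.g. a mio::net::TcpStream.
fn drain<S: Read>(source: &mut S, out: &mut Vec<u8>) -> io::Result<()> {
    let mut buf = [0u8; 4096];
    loop {
        match source.read(&mut buf) {
            Ok(0) => return Ok(()), // peer closed the connection
            Ok(n) => out.extend_from_slice(&buf[..n]),
            // WouldBlock signals that the source is drained; after this point
            // mio guarantees another readable event once the source becomes
            // ready again.
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(()),
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
}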

@Noah-Kennedy

In tokio, we have an optimization that is disabled on Windows because of this behavior.

@Thomasdezeeuw
Collaborator

This is actually not strictly inconsistent; it's a choice we made with the initial Windows (re)implementation. It's documented at https://docs.rs/mio/latest/mio/struct.Poll.html#draining-readiness that we only guarantee that events are returned once WouldBlock has been returned.

Since then I've opened #1611 to reconsider this. I think that issue is about the same thing as this issue.

Edge-triggered behavior, as described in the epoll docs, "only delivers events when changes occur on the monitored file descriptor." There is a question of whether a partial read is actually considered a change on a file descriptor

IIRC for epoll and kqueue you need to (e.g.) read all bytes in the buffer before you get another event. This happens when you call read(2) with a buffer and get n < buffer.len() bytes back.

, i.e. is the Windows behavior still technically "edge-triggered" and in some ways more correct by waiting for WouldBlock to reset interests?

Like I said, this is something we decided on before for initial consistency, but it's something we can reconsider.

In playing around with the epoll and kqueue behavior, it seems that even if only a small read is performed and there is still more data to be read, a subsequent poll will still emit a read-readiness event. This suggests that in practice short reads do reset "read-readiness" in edge-triggered mode.

Is a small read here a read with a small buffer that is completely filled? Because that shouldn't retrigger (create another event); only a read with a big (enough) buffer that is not completely filled should.
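
To make the distinction concrete, a sketch (illustrative names only, not part of mio's API):

use std::io::{self, Read};

// Illustrative only: a read that completely fills `buf` says nothing about
// whether the socket's receive buffer is empty, so it should not be treated
// as a short read; only n < buf.len() (or a WouldBlock error) signals that
// the source is drained for now.
fn is_short_read<S: Read>(source: &mut S, buf: &mut [u8]) -> io::Result<bool> {
    match source.read(buf) {
        Ok(n) => Ok(n < buf.len()),
        Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => Ok(true),
        Err(e) => Err(e),
    }
}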

For consistency, then, sources on Windows would need to reset interests on successful I/O and not just WouldBlock errors.

Something we might be changing in #1611.

@sullivan-sean
Author

sullivan-sean commented Aug 19, 2022

Is a small read here a read with a small buffer that is completely filled? Because that shouldn't retrigger (create another event); only a read with a big (enough) buffer that is not completely filled should.

By "small read" here I meant: if I write 15 bytes to the stream and then read 5, there are still 10 bytes left to be read.

EDIT: this is quite confusing considering that the usage of "short read" elsewhere (in your issue and the SO post I linked) means a read that doesn't fill the provided buffer.

In playing around with the epoll and kqueue behavior, it seems that even if only a small read is performed and there is still more data to be read, a subsequent poll will still emit a read-readiness event.

I was actually incorrect here; what I was really testing when I was playing around was write -> read -> write -> read. Even if the first read is blocking or "small", the second write will reset the interests in epoll and kqueue, but it does not in the current Windows implementation because there is never a call that returns WouldBlock.

i.e. the following reliably produces inconsistent behavior right now:

stream.write(b"Hello world!");
stream.flush();

expect_events(&mut poll, &mut events, |e| e.is_readable());

let mut buf = [0; 5];
stream.read(&mut buf);

stream.write(b"Hello world!");
stream.flush();
expect_events(&mut poll, &mut events, |e| e.is_readable());

because the second write resets the readable interest in epoll and kqueue but not on Windows, so the last line panics on Windows but not on Unix (you can easily reproduce this particular inconsistency in the linked repo).
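
For completeness: the sequence does behave consistently if the partial read is replaced by a loop that reads until WouldBlock, since that is what resets the interests on Windows. A sketch reusing stream, poll, events, and the expect_events helper from the snippet above:

// Sketch: same setup as above, but drain the stream before the second write.
let mut buf = [0; 5];
loop {
    match stream.read(&mut buf) {
        Ok(0) => break,    // peer closed
        Ok(_) => continue, // keep draining
        Err(ref e) if e.kind() == std::io::ErrorKind::WouldBlock => break,
        Err(e) => panic!("read failed: {e}"),
    }
}

// The WouldBlock above reset the readable interest on Windows too, so the
// second write is reported as a readable event on every platform.
stream.write(b"Hello world!").unwrap();
stream.flush().unwrap();
expect_events(&mut poll, &mut events, |e| e.is_readable());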

But I was wrong in saying that just a small read is inconsistent, i.e. the following

stream.write(b"Hello world!");
stream.flush();

expect_events(&mut poll, &mut events, |e| e.is_readable());

let mut buf = [0; 5];
stream.read(&mut buf);
expect_events(&mut poll, &mut events, |e| e.is_readable());

will panic across all platforms, because (my notion of) a "small" read alone is not enough to reset the readable interest.

@sullivan-sean
Author

Since then I've opened #1611 to reconsider this. I think that issue is about the same thing as this issue.

Sorry, I missed this entirely. This seems to be a very similar issue, so we can close this one and move the discussion there if you'd like.

@Thomasdezeeuw
Collaborator

Yes, let's move it to #1611.
