flush changelist more frequently #270

Closed · wants to merge 3 commits into base: master
3 participants
@dwrensha
Contributor

dwrensha commented Sep 13, 2015

This fixes #267.

Contributor

dwrensha commented Sep 13, 2015

kevent() allows the user to register interest in events (via the changelist argument) in the same system call that requests triggered events (via the eventlist argument). mio attempts to take advantage of this by batching kevent() calls: if there are no more than 1024 pending registrations, they are deferred until the kevent() call in the next EventLoop::run_once() call. This approach is enticing because it reduces the number of system calls, but it has problematic semantics.

What if an mio client registers interest on a file descriptor and then closes the descriptor before the next run_once() call? The client should receive no notifications about that descriptor. In mio's current implementation, however, the client gets notified that the (now invalid!) file descriptor is ready. I believe kevent() actually reports an error (via EV_ERROR) for these bad descriptors and mio ignores it; that's another, largely orthogonal bug.
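To illustrate the hazard described above, here is a minimal pure-Rust sketch (hypothetical `Selector`, `register`, and `poll` names, not mio's actual code) that models a deferred changelist flushed only at the next poll, with a `HashSet` standing in for the kernel's file-descriptor table:

```rust
use std::collections::HashSet;

type Fd = i32;

#[derive(Default)]
struct Selector {
    pending: Vec<Fd>,      // deferred changelist, flushed at the next poll
    open_fds: HashSet<Fd>, // stand-in for the kernel's fd table
}

impl Selector {
    fn open(&mut self, fd: Fd) { self.open_fds.insert(fd); }
    fn close(&mut self, fd: Fd) { self.open_fds.remove(&fd); }

    // Batched registration: interest is only recorded, not submitted.
    fn register(&mut self, fd: Fd) { self.pending.push(fd); }

    // The single combined "kevent()" call: submit the changelist and
    // collect events. A closed fd in the changelist yields an error
    // entry (an EV_ERROR analogue); ignoring it and reporting readiness
    // instead would be the bug described above.
    fn poll(&mut self) -> (Vec<Fd>, Vec<Fd>) {
        let mut ready = Vec::new();
        let mut errors = Vec::new();
        for fd in self.pending.drain(..) {
            if self.open_fds.contains(&fd) {
                ready.push(fd);
            } else {
                errors.push(fd);
            }
        }
        (ready, errors)
    }
}

fn main() {
    let mut sel = Selector::default();
    sel.open(7);
    sel.register(7); // interest recorded, submission deferred
    sel.close(7);    // fd closed before the next poll
    let (ready, errors) = sel.poll();
    // The client should see no readiness for fd 7; only the error path fires.
    assert!(ready.is_empty());
    assert_eq!(errors, vec![7]);
    println!("ready: {:?}, errors: {:?}", ready, errors);
}
```

Flushing the changelist at registration time (as this PR does) means the kernel sees the fd while it is still valid, so the stale-readiness case never arises.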

Owner

carllerche commented Sep 14, 2015

Thanks. This change looks very reasonable to me.

I would ❤️ ❤️ it if you could take the snippet in #267 and make it a test. This will be extra helpful in making sure Windows supports the same semantics.

Contributor

dwrensha commented Sep 14, 2015

> I would ❤️ ❤️ if you could take the snippet in #267 and make it a test.

OK. I'll do that and add it to this pull request.

Contributor

dwrensha commented Sep 14, 2015

I've added a test that fails before this change and succeeds afterwards. Unfortunately, it fails on Windows too. :( https://ci.appveyor.com/project/carllerche/mio/build/1.0.42/job/yvvu3sspy95wd176

Contributor

alexcrichton commented Sep 14, 2015

@dwrensha could you try applying this patch to get the test to pass on Windows? I haven't run it locally but I think that'll do the trick.

Contributor

alexcrichton commented Sep 14, 2015

Er, actually just verified that it passed on Windows! If you wanna just throw that on this branch then once CI is green this should be good to go @carllerche?

Contributor

dwrensha commented Sep 14, 2015

@alexcrichton: Thanks! I've applied your patch. However, I'm worried that it doesn't cover all cases. Does Windows reuse socket handles? Could it happen that a different socket is using the socket handle by the time run_once() gets called, causing it to slip past your INVALID_SOCKET test?

Contributor

alexcrichton commented Sep 14, 2015

So I wouldn't put it past Windows to reuse SOCKET handles, but could you elaborate on where you think the badness may arise?

To give some background on the current implementation: each SOCKET is uniquely owned by its corresponding mio type, and when that type is dropped the socket is closed to ensure that all pending I/O is canceled and returns immediately. To do this we replace the current I/O object with FromRawSocket::from_raw_socket(INVALID_HANDLE), so the transfer of ownership via the replacement closes the original object, and all future I/O returns an error because the socket is INVALID_HANDLE. There are still resources allocated for the socket (e.g. buffers), however, which are only freed once all the pending I/O returns and completes.

So the intention of these checks is to never generate an event for a socket that is already known to have been closed. In that sense I'm not quite sure where the reuse of an invalid handle would happen (you're either using your uniquely owned socket or an always-invalid one), but I'm curious what you're thinking as well!
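The "swap in an invalid handle on close" pattern described here can be sketched in pure Rust (hypothetical `Socket` and `IoSlot` types, not mio's Windows backend; a printed message stands in for closesocket()):

```rust
use std::mem;

const INVALID_SOCKET: i64 = -1;

struct Socket(i64);

impl Drop for Socket {
    fn drop(&mut self) {
        if self.0 != INVALID_SOCKET {
            // closesocket() analogue: cancels pending I/O on the real API
            println!("closing socket {}", self.0);
        }
    }
}

struct IoSlot {
    socket: Socket,
}

impl IoSlot {
    // Close the underlying socket by swapping in the invalid sentinel;
    // the replaced Socket is dropped here, which closes it.
    fn close(&mut self) {
        let _old = mem::replace(&mut self.socket, Socket(INVALID_SOCKET));
    }

    fn read(&self) -> Result<usize, &'static str> {
        if self.socket.0 == INVALID_SOCKET {
            Err("invalid socket") // all I/O after close fails fast
        } else {
            Ok(0)
        }
    }
}

fn main() {
    let mut slot = IoSlot { socket: Socket(42) };
    assert!(slot.read().is_ok());
    slot.close();
    assert!(slot.read().is_err()); // I/O after close errors immediately
}
```

Because the slot only ever holds either its uniquely owned socket or the invalid sentinel, a check against the sentinel is enough to suppress events for a closed socket.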

Contributor

dwrensha commented Sep 15, 2015

Oh, I didn't realize that the INVALID_SOCKET value comes from your implementation rather than from Windows. I am less worried now. :)

To elaborate on what I was thinking: one somewhat plausible way to deal with #267 in the kqueue() case would be to keep the current batching and just ignore any "Bad file descriptor" errors. However, the fact that file descriptors and mio tokens can both be reused makes that approach untenable, I think, because you could end up matching events to the wrong I/O objects. I just wanted to make sure we weren't going to hit similar problems here.
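The reuse hazard can be made concrete with a small sketch (illustrative `Token` type and `pending` list, not mio's internals): a stale deferred entry for a recycled fd survives if errors are ignored, and the event for the fresh socket can be routed to the old token.

```rust
use std::collections::HashMap;

type Fd = i32;

#[derive(Clone, Copy, PartialEq, Debug)]
struct Token(usize);

fn main() {
    // Deferred changelist: Token(1) registered interest on fd 5, then
    // closed it before the next poll.
    let mut pending: Vec<(Fd, Token)> = vec![(5, Token(1))];

    // The OS recycles fd 5 for a brand-new socket owned by Token(2),
    // which also registers interest before the next poll.
    pending.push((5, Token(2)));

    // At the next poll fd 5 becomes ready. If "Bad file descriptor"
    // errors were simply ignored, the stale entry is never purged, and
    // matching events by fd can pick the stale owner first.
    let mut routed: HashMap<Fd, Token> = HashMap::new();
    for (fd, tok) in &pending {
        routed.entry(*fd).or_insert(*tok); // first match wins: the stale one
    }
    assert_eq!(routed[&5], Token(1)); // wrong owner would receive the event
}
```

Flushing registrations eagerly surfaces the EBADF for the closed fd at register time, while the fd still unambiguously maps to one owner, avoiding this ambiguity.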

Contributor

alexcrichton commented Sep 15, 2015

Ah ok cool, thanks for the clarification!

Merged as 2c3435c
