
Remaining wepoll items. #1041

Closed
6 of 16 tasks
carllerche opened this issue Jul 19, 2019 · 12 comments
Labels: windows (Related to the Windows OS.)

@carllerche
Member

carllerche commented Jul 19, 2019

Meta issue tracking remaining work to do on the wepoll windows implementation. This description will be updated over time.

Original PR: #1034. Original issue: #1024.

@Thomasdezeeuw
Collaborator

  • Ensure all old Windows related issues are fixed with the new implementation.

@carllerche
Member Author

@Thomasdezeeuw I added your item to the OP. Do you have permission to edit the OP directly?

@Thomasdezeeuw
Collaborator

Yes, but I didn't know that.

@carllerche
Member Author

@Thomasdezeeuw no worries 👍

@PerfectLaugh
Contributor

About the memory leak: see #1042.

@Thomasdezeeuw Thomasdezeeuw added the windows Related to the Windows OS. label Jul 20, 2019
@Thomasdezeeuw Thomasdezeeuw changed the title windows: remaining wepoll items. Remaining wepoll items. Jul 20, 2019
@PerfectLaugh
Contributor

We also probably need to add a notice that a socket can NEVER be used elsewhere simultaneously with Mio, because we simulate edge-triggered (ET) behavior by intercepting every operation a user can perform on it.
If the socket is used somewhere Mio cannot observe (e.g. through RawSocket), the polling behavior will be undefined.

@piscisaureus
Contributor

I would like to add: the current locking scheme is too granular.

It makes little sense to use separate locks for the update_queue and the iocp HANDLE; it's likely to hurt performance and introduce subtle bugs. One example I could find:

    self.update_sockets_events()?;
    self.active_poll_count.fetch_add(1, Ordering::SeqCst);
^-- By calling update_sockets_events() before incrementing active_poll_count, it's possible that another thread skips its own call to update_sockets_events() (because it looks like there are no active polls) when it shouldn't (because another thread is actually about to block).

This is just an example, which could be easily fixed by reversing the two operations. But in general I think it'd be better to use a single lock to protect both the completion port and the associated state. Then you release this lock just before calling GetQueuedCompletionStatusEx() and immediately acquire it again right after GetQueuedCompletionStatusEx() returns.

This might require integrating parts of miow into mio, which seems like a good idea either way.

@PerfectLaugh
Contributor

(Quoting @piscisaureus's comment above.)

Does pull #1055 make sense to you now?

@dtacalau
Contributor

@Thomasdezeeuw I'll start working on this issue; please assign it to me.

@dtacalau
Contributor

dtacalau commented Dec 4, 2019

Had a discussion with @carllerche about the remaining items from this issue. Here's an update on what has been done so far (checked items are resolved) and the remaining action items (status noted under each item).

- [x] Dealing with WSAStartup, #1046 (solved in #1029).
- [ ] Benchmark the new implementation vs. the old.
  TODO: Mio will be benchmarked through Tokio. Will open a Tokio issue to create a benchmark.
- [ ] Ensure wepoll licensing requirements are met (copy license file).
  TODO: No need to copy the license; just mention in the README release notes that the Windows implementation is based on wepoll.
- [ ] Document ownership of pointers being passed around the system.
  TODO: More of a coding practice, not an item for 0.7. Mio's Windows code should have comments documenting pointers passed to ioctl and back.
- [ ] Ensure memory is not leaked on error here.
  WIP.
- [x] Ensure all old Windows-related issues are fixed with the new implementation.
- [ ] A SourceFd (EventedFd) equivalent for Windows.
  Not needed for 0.7; can be pushed to 1.0.
- [ ] Document interaction with RawSocket (do not read/write directly).
  TODO: Not needed for 0.7; push to 1.0. We need to tell users what they are allowed to do when getting the handle of a TcpListener/TcpStream/UdpSocket through as_raw_handle/socket. Will open a separate issue.
- [ ] Concurrent polling & registering, see #1053.
  WIP.
- [ ] Find the source of a possible polling delay, see #1051 (comment).
  WIP.
- [x] Fix issues found by additional tests added in #1070:
  - [x] #1073
  - [x] #1074
  - [x] #1078
  - [x] #1079
  - [x] #1080

@dtacalau
Contributor

dtacalau commented Dec 6, 2019

- [x] Dealing with WSAStartup, #1046 (solved in #1029).
- [x] Benchmark the new implementation vs. the old: tokio-rs/tokio#1918.
- [x] Ensure wepoll licensing requirements are met (copy license file): #1188.
- [x] Document ownership of pointers being passed around the system: #1185.
- [x] Ensure memory is not leaked on error here: #1190.
- [x] Ensure all old Windows-related issues are fixed with the new implementation.
- [x] A SourceFd (EventedFd) equivalent for Windows: #1187.
- [x] Document interaction with RawSocket (do not read/write directly): #1186.
- [x] Concurrent polling & registering, see #1053.
- [x] Find the source of a possible polling delay, see #1051 (comment): #1189.
- [x] Fix issues found by additional tests added in #1070:
  - [x] #1073
  - [x] #1074
  - [x] #1078
  - [x] #1079
  - [x] #1080

@dtacalau
Contributor

dtacalau commented Dec 6, 2019

@Thomasdezeeuw this issue can be closed. All items have either been resolved or are tracked by other open issues.
