Linux OsIpcReceiverSet - Switch to use mio #94
Conversation
|
Review status: 0 of 1 files reviewed at latest revision, 2 unresolved discussions.

src/platform/linux/mod.rs, line 418 [r1] (raw file):
Wasn't a huge fan of having to track all of the fds being watched, but AFAIK since …

src/platform/linux/mod.rs, line 435 [r1] (raw file):
It would be reasonably easy to switch from using a slice to a dynamically sized vector. I'm not a huge fan of having a …

Comments from Reviewable |
Force-pushed from ccf98d8 to 9719339
|
Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions, some commit checks failed.

src/platform/linux/mod.rs, line 418 [r1] (raw file):
I'd like to point out that although the platform backend is named …

I wonder whether there is something specifically motivating this change, or whether it just looked like it might be a good idea in general? Also, when doing performance optimisation, I suggest adding benchmark tests to keep track of actual results... |
|
Great points. The reason I opened this PR was that I started porting this to FreeBSD for servo/servo#11625 and was going to use …

Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions, some commit checks failed.

src/platform/linux/mod.rs, line 418 [r1] (raw file):
|
Thanks for the comments! If you disagree with the …

Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions, some commit checks failed.

Comments from Reviewable |
|
Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions, some commit checks failed.

src/platform/linux/mod.rs, line 435 [r1] (raw file):
I do not exactly disagree with the …

More importantly, I think it is really important to find a sane approach for handling such minor variations in general. I can't speak for @pcwalton and others -- but I for my part would be very sad if we end up with a … (Or a … |
notriddle
commented
Aug 9, 2016
|
Call me insane, but why can't we just use mio with UNIX sockets? |
|
@notriddle can you elaborate please? Where exactly would … |
notriddle
commented
Aug 9, 2016
|
mio provides an abstraction layer over async I/O APIs, such as epoll and kqueue. |
|
@notriddle that might work, I guess... (Though the … Can't tell for sure, as most of … |
|
After taking a quick peek at … The … Take this with a grain of salt, as this is my first or second PR to ipc. |
|
Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions.

src/platform/linux/mod.rs, line 435 [r1] (raw file):
|
@danlrobertson well, in general, if there is an existing abstraction we can use, we probably should -- especially if it involves messy things like platform-specific and unsafe code... The problem is really just that, from a quick glance at the … |
|
Added some benchmarks for …
I repeated it a few times and the results were consistent. Please double-check my benchmark code... This was the first time I've written benchmarks in Rust. Also, I'm really curious whether others get similar results. If you test it out on your boxen, I'd be interested to know your results. |
Force-pushed from 045daba to bf5f767
|
I just created a branch here attempting to use … |
```diff
@@ -170,3 +170,51 @@ fn size_22_4m(b: &mut test::Bencher) {
 fn size_23_8m(b: &mut test::Bencher) {
     bench_size(b, 8 * 1024 * 1024);
 }
+
+macro_rules! gen_poll_test {
```
antrik
Aug 20, 2016
Contributor
This is what I originally did for the other benchmark tests -- but @pcwalton didn't like the use of macros here...
```rust
for result in rx_set.select().unwrap().into_iter() {
    let (_, received_data) = result.unwrap();
    let received_string: String = received_data.to().unwrap();
    assert_eq!(received_string, "Just a flesh wound");
}
```
antrik
Aug 20, 2016
Contributor
I wouldn't do that in a benchmark test. It might have a pretty significant effect on the result...
antrik
Aug 20, 2016
Contributor
In fact -- as much as I like the quote :-) -- I'd probably use a channel of type (), to minimise the overhead of anything but the select() itself...
dlrobertson
Aug 20, 2016
Author
Collaborator
Good point. I updated the type and it did help. Thanks! The results are more like what I expected. I'll try mio again.
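The effect of antrik's `()` suggestion can be illustrated with plain `std::sync::mpsc` as a stand-in for ipc-channel (the function name and message count here are invented for illustration): with a zero-sized payload there is nothing to allocate, serialize, or assert about per message, so a timed loop around the receive side measures wakeup/select overhead rather than payload handling.

```rust
use std::sync::mpsc::channel;
use std::thread;

// Send n zero-sized () messages across a channel and count arrivals.
fn count_unit_messages(n: usize) -> usize {
    let (tx, rx) = channel::<()>();
    let sender = thread::spawn(move || {
        for _ in 0..n {
            tx.send(()).unwrap();
        }
        // tx is dropped here, closing the channel so rx.iter() terminates.
    });
    sender.join().unwrap();
    rx.iter().count()
}

fn main() {
    // Every () message arrives; the per-message cost is close to nothing.
    assert_eq!(count_unit_messages(100), 100);
}
```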
|
```rust
gen_poll_test!{bench_one_percent, 1, 100}
gen_poll_test!{bench_fifty_percent, 50, 100}
gen_poll_test!{bench_one_hundred_percent, 100, 100}
```
antrik
Aug 20, 2016
Contributor
I'm sure we could get more meaningful cases here: while the 100% one is useful to check the extreme end, the 50% one really doesn't tell us much. I'd rather go for 2 and 5 or something like that, which are way more likely in the real world I suspect...
Also, tests with smaller sets would be useful -- including the extreme (but not unlikely) case of just a single channel; and maybe 5 and 20 or so for likely real world cases.
(If you are really ambitious, you could add some instrumentation, to get an idea of the usage patterns actually coming up in Servo, so we don't have to rely on guesswork... I must admit though that I would only do that myself if I have absolutely no idea what might be realistic :-) )
On a related note, I guess it might be useful to also have some tests for the costs of creating/manipulating/destroying the set...
dlrobertson
Aug 20, 2016
Author
Collaborator
> Also, tests with smaller sets would be useful -- including the extreme (but not unlikely) case of just a single channel; and maybe 5 and 20 or so for likely real world cases.

Ah, true. I'll remove the 50% case and add a few more reasonable cases. I wanted to have at least the 1% case, where we expect epoll to perform way better than poll, and the 100% case, where we expect it to be a bit closer. After I push those changes I'll look into adding benchmarks for the destruction of the sets.
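For readers following along: the real `gen_poll_test!` body is not visible in this excerpt, but a hypothetical sketch of its shape might look like the following, where each invocation stamps out one function per (ready, total) ratio. Here the generated functions just return the pair instead of building ipc-channel receiver sets and driving `test::Bencher`:

```rust
// Hypothetical sketch only: the actual macro in the PR drives ipc-channel
// and test::Bencher; this version merely shows the code-generation pattern.
macro_rules! gen_poll_test {
    ($name:ident, $ready:expr, $total:expr) => {
        fn $name() -> (usize, usize) {
            // A real benchmark would create $total receivers, make $ready
            // of them readable, and time select() over the set.
            ($ready, $total)
        }
    };
}

gen_poll_test!{bench_one_percent, 1, 100}
gen_poll_test!{bench_one_hundred_percent, 100, 100}

fn main() {
    assert_eq!(bench_one_percent(), (1, 100));
    assert_eq!(bench_one_hundred_percent(), (100, 100));
}
```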
|
Whoops, forgot that you were using Reviewable here... Hope you don't mind. I'm not sure what to make of the disappointing … |
Force-pushed from bf5f767 to 5754013
|
So after the removal of the assert, the results are more in line with what is expected: mio performs better than the original for the 1% case, but not as good as using epoll directly.

original: …
mio: …
epoll: …

AFAIK mio will not perform better than directly using epoll for our use case, due to the way mio wraps epoll. It does use …

Note: using mio does perform better than our current implementation, so IMO … |
Force-pushed from 5754013 to b720d67
Linux OsIpcReceiverSet - Switch to use mio

Switch from using poll to a level-triggered use of epoll for the Linux OsIpcReceiverSet. The use of epoll should perform better when there are a large number of watched fds.

Side note: An edge-triggered use of `epoll` would be very easy here, but to make the transition easier I chose level-triggered. Let me know if you think edge-triggered would be better. Comments and critiques are welcome!
|
I guess another retry is in order? Or is this CI issue more permanent?... |
|
@bors-servo retry |
|
notriddle
commented
Dec 2, 2016
|
@bors-servo retry |
|
notriddle
commented
Dec 2, 2016
|
@bors-servo retry |
|
Switch from using poll to using mio for the Unix implementation of OsIpcReceiverSet. The use of mio should result in better performance when there are a large number of watched descriptors.
|
@bors-servo r=pcwalton,antrik,emilio |
|
…emilio Linux OsIpcReceiverSet - Switch to use mio
|
\o/ Thanks for all the comments and critiques in the reviews. |
|
Thank you for doing this, and for the patience to get this landed! |
|
Such a big change should have been tested on Servo. |
dlrobertson commented Aug 7, 2016 • edited

Switch from using poll to a level-triggered use of epoll for the Linux OsIpcReceiverSet. The use of epoll should perform better when there are a large number of watched fds.

Side note: An edge-triggered use of `epoll` would be very easy here, but to make the transition easier I chose level-triggered. Let me know if you think edge-triggered would be better. Comments and critiques are welcome!
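For readers unfamiliar with the level-triggered epoll usage the description refers to, here is a minimal, self-contained sketch (Linux-only). It is not the PR's actual implementation: the syscall bindings are hand-declared instead of using the libc crate or ipc-channel's types, and the token value and function name are invented for illustration. Level-triggered means the fd is reported as ready on every wait until its data is drained, which is what makes the poll-to-epoll transition straightforward.

```rust
use std::os::raw::c_int;

const EPOLL_CTL_ADD: c_int = 1;
const EPOLLIN: u32 = 0x001;

// On x86_64 Linux the kernel's epoll_event struct is packed.
#[repr(C, packed)]
struct EpollEvent {
    events: u32,
    data: u64, // caller-chosen token; a receiver set would store an fd or id
}

// Hand-declared bindings so this sketch needs no external crates.
extern "C" {
    fn epoll_create1(flags: c_int) -> c_int;
    fn epoll_ctl(epfd: c_int, op: c_int, fd: c_int, event: *mut EpollEvent) -> c_int;
    fn epoll_wait(epfd: c_int, events: *mut EpollEvent, maxevents: c_int, timeout: c_int) -> c_int;
    fn pipe(fds: *mut c_int) -> c_int;
    fn write(fd: c_int, buf: *const u8, count: usize) -> isize;
}

// Register one readable fd with a level-triggered epoll set, wait for it,
// and return the token stored at registration time.
fn wait_for_one() -> u64 {
    unsafe {
        let mut fds = [0 as c_int; 2];
        assert_eq!(pipe(fds.as_mut_ptr()), 0);
        let epfd = epoll_create1(0);
        assert!(epfd >= 0);

        // Level-triggered is the default (no EPOLLET flag).
        let mut ev = EpollEvent { events: EPOLLIN, data: 42 };
        assert_eq!(epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &mut ev), 0);

        // Make the read end readable, then wait up to one second.
        assert_eq!(write(fds[1], b"x".as_ptr(), 1), 1);
        let mut out = [EpollEvent { events: 0, data: 0 }];
        assert_eq!(epoll_wait(epfd, out.as_mut_ptr(), 1, 1000), 1);
        out[0].data // copy the field out of the packed struct
    }
}

fn main() {
    assert_eq!(wait_for_one(), 42);
    println!("fd reported ready with token 42");
}
```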