
Linux OsIpcReceiverSet - Switch to use mio #94

Merged
merged 2 commits into from Dec 25, 2016

Conversation

@dlrobertson
Collaborator

dlrobertson commented Aug 7, 2016

Switch from using poll to a level-triggered use of epoll for the Linux OsIpcReceiverSet. The use of epoll should perform better when there are a large number of watched fds.

Side note: An edge-triggered use of epoll would be very easy here, but to make the transition easier I chose level-triggered. Let me know if you think edge-triggered would be better. Comments and critiques are welcome!



@dlrobertson dlrobertson force-pushed the dlrobertson:switch_to_epoll branch from 798165b to 6ed6eb6 Aug 7, 2016
@dlrobertson
Collaborator Author

dlrobertson commented Aug 7, 2016

Review status: 0 of 1 files reviewed at latest revision, 2 unresolved discussions.


src/platform/linux/mod.rs, line 418 [r1] (raw file):

    pub fn add(&mut self, receiver: OsIpcReceiver) -> Result<i64,UnixError> {
        let fd = receiver.consume_fd();
        self.allfds.push(fd);

I wasn't a huge fan of having to track all of the fds being watched, but AFAIK, since the OsIpcReceiver is dropped at the end of this function, we need to dup the fd.


src/platform/linux/mod.rs, line 435 [r1] (raw file):

        };
        let nfds = unsafe {
            epoll_wait(self.epollfd, events.as_mut_ptr(), MAX_EVENTS, -1) as usize

It would be reasonably easy to switch from using a slice to a dynamically sized vector. I'm not a huge fan of having a MAX_EVENTS chunk size for dealing with events.



@dlrobertson dlrobertson force-pushed the dlrobertson:switch_to_epoll branch 2 times, most recently from ccf98d8 to 9719339 Aug 8, 2016
@antrik
Contributor

antrik commented Aug 8, 2016

Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions, some commit checks failed.


src/platform/linux/mod.rs, line 418 [r1] (raw file):

Previously, danlrobertson (Dan Robertson) wrote…

Wasn't a huge fan of having to track all of the fd's being watched, but AFAIK since OsIpcReceiver Drops at the end of this function, we need to dup the fd.

Well, strictly speaking, we do not actually have to `dup()` it; we could prevent the drop instead... However, the actual tracking is necessary, as semantically, the FD is moved into the select structure.

I actually have a pending PR for avoiding the unnecessary dup() here (and for a few similar cases); but I decided to postpone it in favour of investigating the possibility of introducing a wrapper type for FDs, which should enable handling these situations in a more robust and natural fashion...

To keep changes minimal, I'd say it's fine for your PR to preserve the existing approach.


src/platform/linux/mod.rs, line 435 [r1] (raw file):

Previously, danlrobertson (Dan Robertson) wrote…

It would be reasonably easy to switch from using a slice to a dynamically sized vector. I'm not a huge fan of having a MAX_EVENTS chunk size for dealing with events.

I don't see how you want to avoid the chunking here: I believe it's inherent in the `epoll` API?... (To make it more transparent, I guess you _could_ implement a loop that tries to catch all events before returning -- but I'm not sure it's worthwhile, or doesn't come with undesired side effects...)

In the name of defensive programming, I would however indeed suggest using a Vec here: then doing a set_len() after the epoll_wait(); and wrapping this entire part in one unsafe{} block. The following code can then simply iterate over the Vec in a perfectly safe fashion. The way it is now, the unsafety spills out into the surrounding code: it relies upon the "safe" code to handle things correctly in order to avoid undefined behaviour, which is very very bad... (Most notably, if epoll_event ever were to grow a Drop implementation for some reason, hell would break loose.)
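The pattern being suggested here, allocating a `Vec`, letting the C call fill it, then calling `set_len()` inside the same `unsafe` block, can be sketched in plain Rust. The `fake_epoll_wait` function below is a hypothetical stand-in for the real `epoll_wait` FFI call, used only so the sketch is self-contained:

```rust
// `fake_epoll_wait` is a hypothetical stand-in for the real FFI call:
// it writes up to `cap` events into `buf` and returns how many it wrote.
#[derive(Clone, Copy, Default)]
struct EpollEvent {
    events: u32,
    data: u64,
}

fn fake_epoll_wait(buf: *mut EpollEvent, cap: usize) -> usize {
    let n = cap.min(2); // pretend two events are ready
    for i in 0..n {
        unsafe { *buf.add(i) = EpollEvent { events: 1, data: i as u64 } };
    }
    n
}

fn wait_events(max_events: usize) -> Vec<EpollEvent> {
    let mut events: Vec<EpollEvent> = Vec::with_capacity(max_events);
    // One unsafe block covers both the call and the set_len, so the
    // surrounding code only ever sees a correctly sized Vec.
    unsafe {
        let n = fake_epoll_wait(events.as_mut_ptr(), max_events);
        events.set_len(n);
    }
    events
}

fn main() {
    let evts = wait_events(16);
    println!("got {} events", evts.len());
}
```

The point of the single `unsafe` block is that iteration over the returned `Vec` is ordinary safe code; no caller can index past the events the kernel actually wrote.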


src/platform/linux/mod.rs, line 399 [r2] (raw file):

        unsafe {
            for fd in self.allfds.iter() {
                let result = libc::close(*fd);

I'm still a bit of a noob in Rust, so I hope this isn't a silly question: but why are you explicitly dereferencing fd here? I don't think that's standard practice in a situation like this?...


src/platform/linux/mod.rs, line 461 [r2] (raw file):

                    Err(err) => return Err(err),
                }
            } else if evt.events & (EPOLLERR | EPOLLHUP) as u32 > 0 {

I'm not sure it's actually correct to handle this here; and in my understanding, the behaviour doesn't differ between poll() and epoll -- so this is actually an unrelated change. I suggest putting it in a separate PR, along with new test cases to cover these situations.


src/platform/linux/mod.rs, line 474 [r2] (raw file):

                    libc::close(*hangup);
                }
            }

I don't understand why you added this here: am I missing some other change that makes it necessary? Or do you think this is a bug in the original implementation? (In which case I'd suggest a separate PR, or at the very least a separate commit...)



@antrik
Contributor

antrik commented Aug 8, 2016

I'd like to point out that although the platform backend is named linux, thus far it has really just been a generic UNIX implementation, which should essentially work on any POSIX system. (Minus some small wrinkles probably...) Switching to an actually Linux-specific interface will make it more painful to port to other systems later on -- I think we first need to make plans how to handle this going forward...

I wonder whether there is something specifically motivating this change; or it just looked like it might be a good idea in general?

Also, when doing performance optimisation, I suggest adding benchmark tests to keep track of actual results...

@dlrobertson
Collaborator Author

dlrobertson commented Aug 8, 2016

Great points. The reason I opened this PR is that I started porting this to FreeBSD for servo/servo#11625 and was going to use kqueue; I noticed Linux didn't use epoll, and figured that would be a useful change as well. There are enough differences (e.g. sockaddr_un and a few other type differences) that some refactoring was going to need to happen anyway. As a whole, I'd have to say it isn't strictly necessary, though... Just thought it might be a good idea.


Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions, some commit checks failed.


src/platform/linux/mod.rs, line 418 [r1] (raw file):

Previously, antrik (Olaf Buddenhagen) wrote…

Well, strictly speaking, we do not actually have to dup() it; we could prevent the drop instead... However, the actual tracking is necessary, as semantically, the FD is moved into the select structure.

I actually have a pending PR for avoiding the unnecessary dup() here (and for a few similar cases); but I decided to postpone it in favour of investigating the possibility of introducing a wrapper type for FDs, which should enable handling these situations in a more robust and natural fashion...

To keep changes minimal, I'd say it's fine for your PR to preserve the existing approach.

Awesome! That would be really great

src/platform/linux/mod.rs, line 435 [r1] (raw file):

Previously, antrik (Olaf Buddenhagen) wrote…

I don't see how you want to avoid the chunking here: I believe it's inherent in the epoll API?... (To make it more transparent, I guess you could implement a loop that tries to catch all events before returning -- but I'm not sure it's worthwhile, or doesn't come with undesired side effects...)

In the name of defensive programming, I would however indeed suggest using a Vec here: then doing a set_len() after the epoll_wait(); and wrapping this entire part in one unsafe{} block. The following code can then simply iterate over the Vec in a perfectly safe fashion. The way it is now, the unsafety spills out into the surrounding code: it relies upon the "safe" code to handle things correctly in order to avoid undefined behaviour, which is very very bad... (Most notably, if epoll_event ever were to grow a Drop implementation for some reason, hell would break loose.)

So you do need some sort of chunking, but the chunking can be more "configurable". There are a few ways this could work, but they're probably not worthwhile (especially no. 2):
  1. Instead of using MAX_EVENTS, take a configurable size x in the constructor; then, instead of creating a slice, create a vector of size x and use epoll_wait(epollfd, vec.as_mut_ptr(), vec.len()).

  2. Use penalties. Again use a config value x: if nfds is greater than or equal to threshold(x), increase x with MIN(MAXIMUM_ALLOWED, ++x); if it is less than threshold(x), decrease it with MAX(MINIMUM_ALLOWED, --x).
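Option 2's adaptive sizing could be sketched like this (the names `threshold`, `MINIMUM_ALLOWED`, and `MAXIMUM_ALLOWED` and all the bounds are illustrative, not anything from the PR):

```rust
// Hypothetical sketch of option 2: grow the event buffer when a wait
// comes back (nearly) full, shrink it when it comes back mostly empty.
const MINIMUM_ALLOWED: usize = 4;
const MAXIMUM_ALLOWED: usize = 1024;

fn threshold(x: usize) -> usize {
    // e.g. "nearly full" means 3/4 of the current capacity
    x * 3 / 4
}

fn adjust(x: usize, nfds: usize) -> usize {
    if nfds >= threshold(x) {
        (x + 1).min(MAXIMUM_ALLOWED) // busy wait: grow, clamped above
    } else {
        x.saturating_sub(1).max(MINIMUM_ALLOWED) // quiet wait: shrink, clamped below
    }
}

fn main() {
    println!("next size: {}", adjust(8, 8));
}
```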

Ah! Good point. Thanks for the tip!


src/platform/linux/mod.rs, line 399 [r2] (raw file):

Previously, antrik (Olaf Buddenhagen) wrote…

I'm still a bit of a noob in Rust, so I hope this isn't a silly question: but why are you explicitly dereferencing fd here? I don't think that's standard practice in a situation like this?...

Good question... I'm a bit of a Rust noob as well. `allfds` can't be moved out of the borrowed context, so `fd` is a `&i32`, while `libc::close` takes a plain `c_int` by value.

src/platform/linux/mod.rs, line 461 [r2] (raw file):

Previously, antrik (Olaf Buddenhagen) wrote…

I'm not sure it's actually correct to handle this here; and in my understanding, the behaviour doesn't differ between poll() and epoll -- so this is actually an unrelated change. I suggest putting it in a separate PR, along with new test cases to cover these situations.

Np. I'll remove it and submit a separate PR if necessary.

src/platform/linux/mod.rs, line 474 [r2] (raw file):

Previously, antrik (Olaf Buddenhagen) wrote…

I don't understand why you added this here: am I missing some other change that makes it necessary? Or do you think this is a bug in the original implementation? (In which case I'd suggest a separate PR, or at the very least a separate commit...)

Ah, actually the `libc::close` is a mistake (I removed the wrong line last night... I'm pretty sure that's why the build is failing?), but the `epoll_ctl` is necessary, so that we stop watching the fd after it hits a hangup. Let me know if I've misunderstood something


@dlrobertson
Collaborator Author

dlrobertson commented Aug 8, 2016

Thanks for the comments! If you disagree with the kqueue/epoll approach or have more questions please let me know.


Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions, some commit checks failed.



@antrik
Contributor

antrik commented Aug 9, 2016

Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions, some commit checks failed.


src/platform/linux/mod.rs, line 435 [r1] (raw file):

Previously, danlrobertson (Dan Robertson) wrote…

So you do need some sort of chunking, but the chunking can be more "configurable". There are a few ways this could work, but they're probably not worthwhile (especially no. 2):

  1. Instead of using MAX_EVENTS, take a configurable size x in the constructor; then, instead of creating a slice, create a vector of size x and use epoll_wait(epollfd, vec.as_mut_ptr(), vec.len()).

  2. Use penalties. Again use a config value x: if nfds is greater than or equal to threshold(x), increase x with MIN(MAXIMUM_ALLOWED, ++x); if it is less than threshold(x), decrease it with MAX(MINIMUM_ALLOWED, --x).

Ah! Good point. Thanks for the tip!

I'm just not convinced it's worthwhile. It might indeed be perfectly fine to go with `MAX_EVENTS` set to 1... This would in fact be more intuitive for something called `select()` I'd say -- I suspect it just behaves differently in the original implementation because `poll()` kinda forces this?...

I just realised though that the select won't yield any actual results (I think) if all the events we get from epoll_wait() are of a "wrong" type... Which is most likely true of the poll() implementation as well?... So many intricate details to consider :-(


src/platform/linux/mod.rs, line 399 [r2] (raw file):

Previously, danlrobertson (Dan Robertson) wrote…

Good question... I'm a bit of a Rust noob as well. allfds can't be moved out of the borrowed context, so fd is a &i32, while libc::close takes a plain c_int by value.

Well, I am aware that the iterator returns `&i32` values -- it's just that since this is a function argument, it would be handled transparently by an automatic deref coercion. (Unless I am very confused...) And as far as I am aware, we do not generally spell out dereferencing if it's covered by coercion?

src/platform/linux/mod.rs, line 474 [r2] (raw file):

Previously, danlrobertson (Dan Robertson) wrote…

Ah, actually the libc::close is a mistake (I removed the wrong line last night... I'm pretty sure that's why the build is failing?), but the epoll_ctl is necessary, so that we stop watching the fd after it hits a hangup. Let me know if I've misunderstood something

Yeah, that sounds right -- an `epoll_ctl()` here would totally make sense.

(Closing the descriptor actually removes it from the set automatically AIUI -- but only if there are no other outstanding FDs referencing the same underlying open(); and also, it would potentially discard pending messages, which we don't want...)



@antrik
Contributor

antrik commented Aug 9, 2016

I do not exactly disagree with the epoll/kqueue approach -- I just wonder whether the extra complexity is justified here...

More importantly, I think it is really important to find a sane approach for handling such minor variations in general. I can't speak for @pcwalton and others -- but I for my part would be very sad if we end up with a FreeBSD backend that is just a copy of the Linux one with only some small changes, and another one for generic UNIX...

(Or a Hurd backend that is mostly a copy of MacOS one, for that matter.)

@notriddle

notriddle commented Aug 9, 2016

Call me insane, but why can't we just use mio with UNIX sockets?

@antrik
Contributor

antrik commented Aug 9, 2016

@notriddle can you elaborate please? Where exactly would mio come in?

@notriddle

notriddle commented Aug 9, 2016

mio provides an abstraction layer over async I/O APIs such as epoll and kqueue.
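The shape of such an abstraction can be sketched with a toy trait. This is purely illustrative and is not mio's actual API; in mio, the `Poll` type plays this role, backed by epoll on Linux and kqueue on the BSDs:

```rust
// Purely illustrative: a trait in the spirit of what mio abstracts over.
type Fd = i32;

trait Selector {
    fn add(&mut self, fd: Fd);
    fn wait(&mut self) -> Vec<Fd>;
}

// A toy backend that just reports everything it watches as ready;
// real backends would wrap epoll (Linux) or kqueue (BSD/macOS).
struct ToySelector {
    watched: Vec<Fd>,
}

impl Selector for ToySelector {
    fn add(&mut self, fd: Fd) {
        self.watched.push(fd);
    }
    fn wait(&mut self) -> Vec<Fd> {
        self.watched.clone()
    }
}

fn main() {
    let mut s = ToySelector { watched: Vec::new() };
    s.add(7);
    s.add(9);
    println!("ready: {:?}", s.wait());
}
```

The appeal is that platform-specific, unsafe FFI lives behind one trait boundary instead of being duplicated per backend.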

@dlrobertson dlrobertson force-pushed the dlrobertson:switch_to_epoll branch from 9719339 to 6c0e29b Aug 9, 2016
@antrik
Contributor

antrik commented Aug 9, 2016

@notriddle that might work I guess... (Though the Poll struct actually seems more relevant than the unix module for this.)

Can't tell for sure, as most of mio doesn't appear to have actual documentation?...

@dlrobertson
Collaborator Author

dlrobertson commented Aug 9, 2016

After taking a quick peek at mio, I agree.

The Poll struct seems quite relevant, but to be honest, I don't think the poll portion is going to be the hardest part on the road to supporting more OSes. For example, this is my current progress on moving ipc-channel to FreeBSD: https://github.com/danlrobertson/ipc-channel/tree/support-freebsd. The current implementation of select with poll works, but a huge number of other tests fail.

Take this with a grain of salt, as this is my first or second PR to ipc.

@dlrobertson
Collaborator Author

dlrobertson commented Aug 9, 2016

Review status: 0 of 1 files reviewed at latest revision, 5 unresolved discussions.


src/platform/linux/mod.rs, line 435 [r1] (raw file):

Previously, antrik (Olaf Buddenhagen) wrote…

I'm just not convinced it's worthwhile. It might indeed be perfectly fine to go with MAX_EVENTS set to 1... This would in fact be more intuitive for something called select() I'd say -- I suspect it just behaves differently in the original implementation because poll() kinda forces this?...

I just realised though that the select won't yield any actual results (I think) if all the events we get from epoll_wait() are of a "wrong" type... Which is most likely true of the poll() implementation as well?... So many intricate details to consider :-(

I agree. If it is decided to use `epoll/kqueue` a more complex implementation probably isn't worth it.

Ah! Yes, that is an astute observation. For both implementations, if none of the results are POLLIN, the returned vec will be empty.
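The situation described here, where a wait returns only "wrong"-type events and select() ends up with nothing to report, is easy to see with the flag arithmetic. The flag values below are the usual ones from Linux's `<sys/epoll.h>`, declared locally just for illustration:

```rust
// Usual Linux epoll flag values, declared here for the sketch rather
// than pulled in via the libc crate.
const EPOLLIN: u32 = 0x001;
const EPOLLERR: u32 = 0x008;
const EPOLLHUP: u32 = 0x010;

// Keep only the fds whose event mask includes readability.
fn readable_fds(events: &[(i32, u32)]) -> Vec<i32> {
    events
        .iter()
        .filter(|&&(_, ev)| ev & EPOLLIN != 0)
        .map(|&(fd, _)| fd)
        .collect()
}

fn main() {
    // One readable fd, one hangup: only the readable one survives.
    println!("{:?}", readable_fds(&[(5, EPOLLIN), (6, EPOLLHUP)]));
    // If every event is an error/hangup, the result is empty: the
    // "no actual results" case discussed above.
    println!("{:?}", readable_fds(&[(6, EPOLLHUP), (7, EPOLLERR)]));
}
```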


src/platform/linux/mod.rs, line 399 [r2] (raw file):

Previously, antrik (Olaf Buddenhagen) wrote…

Well, I am aware that the iterator returns &i32 values -- it's just that since this is a function argument, it would be handled transparently by an automatic deref coercion. (Unless I am very confused...) And as far as I am aware, we do not generally spell out dereferencing if it's covered by coercion?

I expected that as well, but it would not compile without the Deref
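This matches how coercion actually works: deref coercion converts `&T` to `&U` when `T: Deref<Target = U>`; it never turns a `&i32` into an `i32` by value for a function argument, so the explicit `*fd` is required. A minimal std-only illustration (`close_like` is a hypothetical stand-in for `libc::close`, which also takes the fd by value):

```rust
// Hypothetical stand-in for `libc::close`: takes the fd by value.
fn close_like(fd: i32) -> i32 {
    fd
}

fn main() {
    let allfds = vec![3, 4, 5];
    let mut closed = Vec::new();
    for fd in allfds.iter() {
        // `fd` is a `&i32` here; `close_like(fd)` would not compile,
        // because coercion goes `&T -> &U`, never `&i32 -> i32`.
        closed.push(close_like(*fd));
    }
    println!("{:?}", closed);
}
```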

src/platform/linux/mod.rs, line 474 [r2] (raw file):

Previously, antrik (Olaf Buddenhagen) wrote…

Yeah, that sounds right -- an epoll_ctl() here would totally make sense.

(Closing the descriptor actually removes it from the set automatically AIUI -- but only if there are no other outstanding FDs referencing the same underlying open(); and also, it would potentially discard pending messages, which we don't want...)

Yup. Seems to have fixed the build. Thanks!


@antrik
Contributor

antrik commented Aug 9, 2016

@danlrobertson well, in general, if there is an existing abstraction we can use, we probably should -- especially if it involves messy things like platform-specific and unsafe code... The problem is really just that from a quick glance at the mio "documentation" -- which for the most part is just a cross-reference really -- I can't tell whether it is likely to work or not. But it's certainly worth some investigation...

@dlrobertson
Collaborator Author

dlrobertson commented Aug 10, 2016

Added some benchmarks for poll. The first results I got for the new version and the old version were:

$ git checkout switch_to_epoll
$ cargo bench poll
   Compiling ipc-channel v0.5.1 (file:///home/drobertson/git/servo/ipc-channel)
       Finished release [optimized] target(s) in 68.47 secs
            Running target/release/bench-cd0b4141c4f39269

            running 3 tests
            test poll::bench_fifty_percent       ... bench:     302,979 ns/iter (+/- 1,959)
            test poll::bench_one_hundred_percent ... bench:     604,051 ns/iter (+/- 4,740)
            test poll::bench_one_percent         ... bench:       5,383 ns/iter (+/- 129)

            test result: ok. 0 passed; 0 failed; 0 ignored; 3 measured

$ git checkout HEAD~1
$ cargo bench poll
   Compiling ipc-channel v0.5.1 (file:///home/drobertson/git/servo/ipc-channel)
       Finished release [optimized] target(s) in 8.99 secs
            Running target/release/bench-cd0b4141c4f39269

            running 3 tests
            test poll::bench_fifty_percent       ... bench:     487,272 ns/iter (+/- 3,168)
            test poll::bench_one_hundred_percent ... bench:   1,146,511 ns/iter (+/- 11,361)
            test poll::bench_one_percent         ... bench:      13,372 ns/iter (+/- 489)

            test result: ok. 0 passed; 0 failed; 0 ignored; 3 measured

I repeated it a few times and the results were consistent. Please double-check my benchmark code... this was the first time I've written benchmarks in Rust. Also, I'm really curious whether others get similar results. If you test it out on your boxen, I'd be interested to know your results.

@dlrobertson dlrobertson force-pushed the dlrobertson:switch_to_epoll branch 2 times, most recently from 045daba to bf5f767 Aug 10, 2016
@dlrobertson
Collaborator Author

dlrobertson commented Aug 11, 2016

I just created a branch here attempting to use mio instead. Interestingly, the performance on the benchmark tests I just wrote was almost identical to the implementation with regular poll. Note: if we were to use mio, we would need to use 0.6.x.

@@ -170,3 +170,51 @@ fn size_22_4m(b: &mut test::Bencher) {
fn size_23_8m(b: &mut test::Bencher) {
bench_size(b, 8 * 1024 * 1024);
}

macro_rules! gen_poll_test {


@antrik

antrik Aug 20, 2016

Contributor

This is what I originally did for the other benchmark tests -- but @pcwalton didn't like the use of macros here...


@dlrobertson

dlrobertson Aug 20, 2016

Author Collaborator

Updated to use a function instead

    for result in rx_set.select().unwrap().into_iter() {
        let (_, received_data) = result.unwrap();
        let received_string: String = received_data.to().unwrap();
        assert_eq!(received_string, "Just a flesh wound");


@antrik

antrik Aug 20, 2016

Contributor

I wouldn't do that in a benchmark test. It might have a pretty significant effect on the result...


@antrik

antrik Aug 20, 2016

Contributor

In fact -- as much as I like the quote :-) -- I'd probably use a channel of type (), to minimise the overhead of anything but the select() itself...


@dlrobertson

dlrobertson Aug 20, 2016

Author Collaborator

Good point. I updated the type and it did help. Thanks! the results are more like what I expected. I'll try mio again.


gen_poll_test!{bench_one_percent, 1, 100}
gen_poll_test!{bench_fifty_percent, 50, 100}
gen_poll_test!{bench_one_hundred_percent, 100, 100}


@antrik

antrik Aug 20, 2016

Contributor

I'm sure we could get more meaningful cases here: while the 100% one is useful to check the extreme end, the 50% one really doesn't tell us much. I'd rather go for 2 and 5 or something like that, which are way more likely in the real world I suspect...

Also, tests with smaller sets would be useful -- including the extreme (but not unlikely) case of just a single channel; and maybe 5 and 20 or so for likely real world cases.

(If you are really ambitious, you could add some instrumentation, to get an idea of the usage patterns actually coming up in Servo, so we don't have to rely on guesswork... I must admit though that I would only do that myself if I have absolutely no idea what might be realistic :-) )

On a related note, I guess it might be useful to also have some tests for the costs of creating/manipulating/destroying the set...


@dlrobertson

dlrobertson Aug 20, 2016

Author Collaborator

Also, tests with smaller sets would be useful -- including the extreme (but not unlikely) case of just a single channel; and maybe 5 and 20 or so for likely real world cases.

Ah, true. I'll remove the 50% case and make a few more reasonable cases. I wanted to have at least the 1% case, where we expect epoll to perform way better than poll, and the 100% case, where we expect it to be a bit closer. After I push those changes I'll look into adding benchmarks for the destruction of the sets.

@antrik
Contributor

antrik commented Aug 20, 2016

Whoops, forgot that you were using Reviewable here... Hope you don't mind.

I'm not sure what to make of the disappointing mio results. Do you have any idea what causes the overhead there? Is it even using epoll?...

@dlrobertson dlrobertson force-pushed the dlrobertson:switch_to_epoll branch from bf5f767 to 5754013 Aug 20, 2016
@dlrobertson
Collaborator Author

dlrobertson commented Aug 20, 2016

So after the removal of the assert the results are more in line with what is expected. mio performs better than the original for the 1% case, but not as well as using epoll directly.

original

running 3 tests
test poll::bench_fifty_percent       ... bench:     330,639 ns/iter (+/- 6,048)
test poll::bench_one_hundred_percent ... bench:     747,497 ns/iter (+/- 10,552)
test poll::bench_one_percent         ... bench:      13,407 ns/iter (+/- 216)

mio

running 3 tests
test poll::bench_fifty_percent       ... bench:     305,776 ns/iter (+/- 3,245)
test poll::bench_one_hundred_percent ... bench:     717,213 ns/iter (+/- 8,743)
test poll::bench_one_percent         ... bench:       4,931 ns/iter (+/- 240)

epoll

running 3 tests
test poll::bench_fifty_percent       ... bench:     201,878 ns/iter (+/- 3,569)
test poll::bench_one_hundred_percent ... bench:     403,554 ns/iter (+/- 5,512)
test poll::bench_one_percent         ... bench:       4,855 ns/iter (+/- 134)

AFAIK mio will not perform better than directly using epoll for our use case, due to the way mio wraps epoll. It does use kqueue/epoll, but the overhead of making their wrapper fit our current design is probably why that implementation performs worse. E.g. getting the file descriptor from the event is super easy and efficient when directly using epoll/poll, but difficult and inefficient when using mio (I could have missed an optimization in my implementation, so this could be my fault).
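The fd-recovery overhead described here comes from mio handing back an opaque token chosen at registration time rather than the fd itself, so getting the fd means a side lookup. Roughly (the `Token` newtype and the map are illustrative, not mio's exact API):

```rust
use std::collections::HashMap;

// Illustrative only: mio-style events carry back a `Token` you chose
// when registering, so recovering the fd needs a lookup, where raw
// epoll hands you `epoll_event.data` (e.g. the fd) directly.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Token(usize);

fn fds_for(ready: &[Token], fd_of: &HashMap<Token, i32>) -> Vec<i32> {
    ready.iter().map(|t| fd_of[t]).collect()
}

fn demo() -> Vec<i32> {
    let mut fd_of = HashMap::new();
    fd_of.insert(Token(0), 5); // register(): remember token -> fd
    fd_of.insert(Token(1), 9);

    // An event arrives carrying Token(1); look up which fd that was.
    fds_for(&[Token(1)], &fd_of)
}

fn main() {
    println!("{:?}", demo());
}
```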

Note: using mio does perform better than our current implementation, so IMO mio should not be completely disregarded at this point

bors-servo added a commit that referenced this pull request Nov 29, 2016
Linux OsIpcReceiverSet - Switch to use mio

@bors-servo
Contributor

bors-servo commented Nov 29, 2016

💔 Test failed - status-travis

@antrik
Contributor

antrik commented Dec 1, 2016

I guess another retry is in order? Or is this CI issue more permanent?...

@emilio
Member

emilio commented Dec 1, 2016

@bors-servo retry

@bors-servo
Contributor

bors-servo commented Dec 1, 2016

Testing commit 78891e7 with merge 29a912b...

bors-servo added a commit that referenced this pull request Dec 1, 2016
Linux OsIpcReceiverSet - Switch to use mio

@bors-servo
Contributor

bors-servo commented Dec 1, 2016

💔 Test failed - status-travis

@notriddle

notriddle commented Dec 2, 2016

@bors-servo retry

@bors-servo
Contributor

bors-servo commented Dec 2, 2016

Testing commit 78891e7 with merge 703a044...

bors-servo added a commit that referenced this pull request Dec 2, 2016
Linux OsIpcReceiverSet - Switch to use mio

@bors-servo
Contributor

bors-servo commented Dec 2, 2016

💔 Test failed - status-appveyor

@notriddle

notriddle commented Dec 2, 2016

Build started
Environment variable name or value is too long.
@notriddle

notriddle commented Dec 2, 2016

@bors-servo retry

@bors-servo
Contributor

bors-servo commented Dec 2, 2016

Testing commit 78891e7 with merge dda6095...

bors-servo added a commit that referenced this pull request Dec 2, 2016
Linux OsIpcReceiverSet - Switch to use mio

@bors-servo
Contributor

bors-servo commented Dec 2, 2016

💔 Test failed - status-appveyor

Switch from using poll to using mio for the Unix implementation of
OSIpcReceiverSet. The use of mio should result in better performance
when there are a larger number of watched descriptors.
@dlrobertson dlrobertson force-pushed the dlrobertson:switch_to_epoll branch from 78891e7 to f6cefc5 Dec 5, 2016
@emilio
Copy link
Member

emilio commented Dec 25, 2016

@bors-servo r=pcwalton,antrik,emilio

@bors-servo
Contributor

bors-servo commented Dec 25, 2016

📌 Commit f6cefc5 has been approved by pcwalton,antrik,emilio

@bors-servo
Contributor

bors-servo commented Dec 25, 2016

Testing commit f6cefc5 with merge ddad900...

bors-servo added a commit that referenced this pull request Dec 25, 2016
…emilio

Linux OsIpcReceiverSet - Switch to use mio

@bors-servo
Contributor

bors-servo commented Dec 25, 2016

☀️ Test successful - status-appveyor, status-travis

@bors-servo bors-servo merged commit f6cefc5 into servo:master Dec 25, 2016
3 checks passed:
continuous-integration/appveyor/pr: AppVeyor build succeeded
continuous-integration/travis-ci/pr: The Travis CI build passed
homu: Test successful
@dlrobertson
Collaborator Author

dlrobertson commented Dec 25, 2016

\o/ Thanks for all the comments and critiques in the reviews.

@emilio
Member

emilio commented Dec 25, 2016

Thank you for doing this, and for the patience to get this landed!

@nox
Member

nox commented Feb 14, 2017

Such a big change should have been tested on Servo.

servo/servo#15537 (comment)
