tokio integration #72
@antoyo, @zeenix and I spent some time hacking on a proof of concept of tokio integration. We managed to create a basic example client, started hacking on an example server, and then began feeling a bit over our heads w/r/t dbus-rs internals. I hope that, at the least, this work can serve as guidance toward the real solution. Here's the branch: https://github.com/brson/dbus-rs/tree/tokio

The entire implementation is contained in tokio.rs. There is a client example and a server. The comment at the top of tokio.rs describes the integration strategy, which is derived from tokio-curl. This patch integrates directly into dbus-rs for simplicity, but it would work just as well as a standalone crate.

One unfortunate thing about this integration is that it needs to know directly about tokio-core and mio in order to communicate information about the fds that come out of dbus. So it's not completely abstracted from the underlying event loop, and will need specific integration into future tokio event loop implementations. I figure solutions there will shake out in time.

Since we didn't make much progress on the server implementation, we just wrote the server example to demonstrate what might be a plausible API. In the client code we took the strategy of encapsulating Connection in TokioConnection; for the server we might similarly encapsulate it in TokioServer, or possibly give TokioConnection a method that returns a server future of some kind, which can then be used to drive an event loop (this seems like it may be more consistent with the existing Connection design).

The big thing we got stuck on was understanding the parameterization of

FYI, I don't personally expect to continue working on this, but I'm hopeful @antoyo and @zeenix can continue in collaboration with @diwic.
MTFn, MTFnMut and MTSync represent methods that are
@brson Looking at the server example, I see you added an
Yes, that was the intent.
So async_method itself does not return a future; it's only the callback provided to async_method that returns a future. That callback can be considered a method call future factory. The idea is that the 'engine' itself calls that callback (factory) to construct a new future, and is then itself responsible for scheduling a task based on that future. I cannot say that this is a correct idea, but it felt right at a first pass.
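The factory idea above can be sketched with plain std-only Rust (no real tokio/futures types; all names here, like `Engine` and `PendingReply`, are hypothetical stand-ins, and the "future" is just a deferred closure):

```rust
type Reply = String;
// Stand-in for a future: a deferred computation the engine can drive later.
type PendingReply = Box<dyn FnOnce() -> Reply>;
// The callback handed to async_method is a *factory*: called once per
// incoming method call, it returns a new "future" for that call.
type Factory = Box<dyn Fn(&str) -> PendingReply>;

struct Engine {
    factory: Factory,
}

impl Engine {
    // The engine, not the user, invokes the factory and schedules the result.
    fn handle_call(&self, sender: &str) -> Reply {
        let pending = (self.factory)(sender); // build a fresh future
        pending() // "drive" it to completion (a real engine would poll it)
    }
}

fn main() {
    let engine = Engine {
        factory: Box::new(|sender: &str| -> PendingReply {
            let who = sender.to_string();
            Box::new(move || format!("Hello {}!", who))
        }),
    };
    assert_eq!(engine.handle_call("alice"), "Hello alice!");
    assert_eq!(engine.handle_call("bob"), "Hello bob!");
    println!("ok");
}
```

The point of the factory shape is that the engine stays in control of when each per-call future is created and scheduled, rather than the user handing it an already-running future.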
Right, it's specifically this part I'm wondering about. How do I schedule a task based on a future?
Having read up a little about futures and tasks, I think maybe it will be better for |
Ok, so I tried from the other end, i.e., trying to build a tree with async methods. I figured this would be the easier part, since it does not do much fd-related stuff. But I haven't come very far yet. I'm stuck on two things:
@diwic:

```rust
let signal2 = signal.clone();
sleep_future.and_then(move |_| {
    let s = format!("Hello {}!", sender);
    let mret = mret.append1(s);
    let sig = signal2.msg(&pname, &iname).append1(&*sender);
    // Two messages will be returned - one is the method return (and should always be there),
    // and in our case we also have a signal we want to send at the same time.
    Ok(vec!(mret, sig))
})
```

For your other issue, you should most probably avoid
Ah, right. It would have been helpful if the compiler had given a hint about which variable it was complaining about. Fixed now. As for
Okay, I've started with this (see just-committed code). One piece fell into place when I changed the example to convert a TimerError to a MethodErr. I'll continue later, need to go to work now :-)
@diwic: I reported an issue for the hint and it was fixed. For the
As you have perhaps noticed, not much has happened during the last few weeks. This is partially due to other things taking priority (and I expect the same for a few weeks more), but also due to the Tokio architecture being hard for me to understand. I've filed tokio-rs/tokio#9 now; let's see if that gives something.
Ok, so with some help in that other thread, I was able to get somewhere, so I have somewhat of a working client. Here's my test case:

Feel free to review / try it out etc. I think the main differences between mine and brson's implementations are:
@diwic cool. :)
This tokio integration would be tremendously helpful! I'd like to refactor my crate bulletinboard to make it work with tokio. Do you already have a rough idea when this feature will land in your dbus crate?
@manuels It will not land in the dbus crate; dbus-tokio will be a separate crate. The slow progress these last few weeks is basically me prioritizing other things. Those things will hopefully calm down a week from now, but it's hard to know for sure. Also expect redesigns as I learn more of Tokio.
Thanks for the feedback, @diwic!
Good news! I finally got something up and working well enough that I have just made the first release of dbus-tokio. What should be working is:

So @manuels, @zeenix, @antoyo - this is where I hand over to you to try it out and see what you like and what you don't like, whether it's working or whether it's buggy etc... (and @albel727 might be interested as well).
Looks great. The interface looks quite simple.
I've taken a look at the implementation, and I'm starting to think that implementing a tokio wrapper around libdbus isn't possible in a clean and reliable way, at least not with

The problem I see is that
See socket_handle_watch() and

In the current implementation,

Well, it's a bit more complicated than this, since everything that, e.g., calls _dbus_connection_do_iteration_unlocked(), like

So, if my understanding is correct, if sent/received messages are more than 4096 bytes in size on average, the current implementation will grind to a halt. I don't see how this can be fixed in general, as none of the dbus functions return whether they encountered

I encourage someone to test my guess, since I don't have time for it just now. As for possible workarounds, maybe issuing
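For what it's worth, the failure mode predicted above (edge-triggered wakeups combined with a bounded read per wakeup) can be illustrated with a std-only simulation. The constants and names below are illustrative stand-ins, not the real libdbus/mio internals:

```rust
// Simulates the predicted starvation: an edge-triggered reactor only wakes
// a task when NEW bytes arrive, while the handler reads at most READ_CHUNK
// bytes per wakeup. Leftover buffered bytes then sit unread forever.
const READ_CHUNK: usize = 2048;

struct Socket {
    buffered: usize, // bytes waiting in the (simulated) kernel buffer
}

struct Reactor {
    ready: bool, // edge-triggered: set only when new bytes arrive
}

impl Reactor {
    fn data_arrived(&mut self, sock: &mut Socket, bytes: usize) {
        sock.buffered += bytes;
        self.ready = true; // an edge: readiness changed
    }

    // One turn of the event loop: wake the handler only on an edge.
    fn turn(&mut self, sock: &mut Socket) -> usize {
        if !self.ready {
            return 0; // no edge, no wakeup: leftover bytes are ignored
        }
        self.ready = false;
        let n = sock.buffered.min(READ_CHUNK);
        sock.buffered -= n; // handler reads at most one chunk per wakeup
        n
    }
}

fn main() {
    let mut sock = Socket { buffered: 0 };
    let mut reactor = Reactor { ready: false };

    // A 4096-byte message arrives; only half is consumed on the wakeup.
    reactor.data_arrived(&mut sock, 4096);
    assert_eq!(reactor.turn(&mut sock), 2048);
    assert_eq!(sock.buffered, 2048);

    // No new data means no new edge: the remaining 2048 bytes are stuck.
    assert_eq!(reactor.turn(&mut sock), 0);
    assert_eq!(sock.buffered, 2048);
    println!("ok");
}
```

With level-triggered signaling, `turn` would instead fire as long as `buffered > 0`, which is why the comment suggests level-based loops should work.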
@albel727: Do you think it could work with alternative event loops like futures-glib?
It probably could, though I'm not sure how that integrates with tokio, and this is the "Tokio integration" issue, after all. Anything with level-based signaling should work. Glib uses

The possible workarounds are numerous, though few are elegant. Maybe even something as simple as overriding

But first we'd better make sure that it indeed fails the way I predict. Then potential fixes can be thought about and tried.
So I've tested receipt of messages, with a simple server and dbus-test-tool, and yeah, it fails like I expected.

```rust
fn main() {
    let conn = Rc::new(Connection::get_private(::dbus::BusType::Session).unwrap());
    //println!("Conn: {}", conn.unique_name());
    conn.register_name("org.dbus.tokio.Test", 7).unwrap();
    let mut old_cb = conn.replace_message_callback(None).unwrap();
    conn.replace_message_callback(Some(Box::new(move |conn, msg| {
        old_cb(conn, msg);
        true // Never reply to method calls, to suppress "vanished sender" errors.
    })));
    let mut core = ::tokio_core::reactor::Core::new().unwrap();
    let aconn = AConnection::new(conn.clone(), core.handle()).unwrap();
    let items: AMessageStream = aconn.messages().unwrap();
    let signals = items.for_each(|m| {
        println!("Message from {:?} was: {:?}", m.sender(), m);
        //println!("{:?}", m.get_items());
        Ok(())
    });
    core.run(signals).unwrap();
}
```

Executing

will only result in receipt roughly every second time, because messages fill the socket with twice as many bytes as are processed per wakeup. Observing the same failure with sending is harder, because unlike reading, which is limited to 2048 bytes per iteration,

Simple
Speaking of
I assume

But something is subtly broken anyway. I observe that
Thanks a lot for your review, @albel727! I've merged your code, looks all good to me.
I have figured out the hanging bug. This is how it happens:

If the data arrived after the socket check but before

But the next invocation doesn't happen, due to

So neither

And thus
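Since parts of the explanation above were lost in extraction, here is a generic sketch of the check-then-park lost-wakeup race it appears to describe, as a std-only simulation with entirely hypothetical names:

```rust
// Models a classic lost-wakeup race: a task checks the socket, finds it
// empty, and then parks -- but data that slipped in BETWEEN the check and
// the park never produces another wakeup, so the task hangs.
struct Socket {
    buffered: usize,
}

struct Task {
    parked: bool,
    wakeups: u32,
}

impl Task {
    // Edge-style notification: only wakes a task that is already parked.
    fn notify(&mut self) {
        if self.parked {
            self.parked = false;
            self.wakeups += 1;
        }
        // If the task is not parked yet, the edge is simply lost.
    }
}

fn main() {
    let mut sock = Socket { buffered: 0 };
    let mut task = Task { parked: false, wakeups: 0 };

    // 1. The task checks the socket: empty, so it decides to park.
    let saw_data = sock.buffered > 0;
    assert!(!saw_data);

    // 2. Data arrives between the check and the park: the notification
    //    fires while the task is still running, so it is lost.
    sock.buffered += 100;
    task.notify();

    // 3. The task parks based on its now-stale check...
    task.parked = true;

    // 4. ...and no further notification will fire for the buffered data.
    assert_eq!(task.wakeups, 0);
    assert!(task.parked); // hung: data is waiting but nobody wakes up
    println!("ok");
}
```

The usual fix is to re-check the readiness condition after registering for the wakeup but before actually sleeping, so the window between check and park is closed.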
Beats me. Maybe they didn't think it would make a difference, if you loop it the standard

First they try to dispatch the already-read messages. If there's none and it's because they failed to read them due to lack of memory, they sleep. Otherwise they try to read/write the socket with

And to avoid confusion, by that I mean not our

So if a message is read by
Hmm, sounds like we should insert an extra call to
I'd rather we never use
If you talk about replacing
Well, a few Nothings are at least something? :-) I'm thinking of people making loops like:

In this case, messages will arrive faster if we change the order of
To nitpick, that loop can't work the way you wrote it, since

```rust
for msg in conn.iter(0) {
    match msg {
        /* process msgs and Nothings */
    }
    /* do something else */
}
```

Then yeah, the messages will arrive one iteration faster. Whether this is really worth it, I'm still not sure. If the user is OK with a second delay between each message, I'm not sure they'd be picky about latency anyway. But I guess it wouldn't hurt, so do as you wish. I hadn't thought it through before, but

So it's like you'd copy-paste the contents of

```rust
for msg in conn.iter(0) {
    let msg = match msg {
        Nothing => conn.iter(None).next().unwrap_or(Nothing),
        msg => msg
    };
    match msg {
        /* process msgs and Nothings */
    }
    /* do something else */
}
```

if they really wish it. Considering that we already require

Speaking of required things: another thing that seems to be required for dbus-tokio is a user-customizable callback for either "watches changed" notifications or for "pending_items not empty" notifications. At least I don't see how to wake up the
Well, actually there are other hacky ways, like never really disabling the

On the other hand, tokio never deregisters

But then again, in tokio's case it's at least mitigated by edge-triggering, so it doesn't wake up all that often. Our level-triggered fd will not let the event loop sleep at all, if there's as much as one byte ready for reading.
Which we'd be unable to consume, I must clarify. Oh, and
That depends on the actual code you have in mind, but no, I don't think this would be a good idea. We don't want to do IO and fill the libdbus inner queue before we dispatch everything that's already in there, as it would only mean a larger memory footprint with no benefit. Worst case, message dispatching will not be able to keep up with message arrival at all, and we'd have bufferbloat on our hands. So unconditional
Another reason not to use
I meant like this:

To summarize:
I don't get the "All watches disabled" case. What is it that re-enables the watch, is it not during a call to
I think we should also call need_read in this case, so I pushed a patch that does this. It rarely happens though so I don't have a good test case for it. |
Yeah, this is a receipt delay only when we're supposedly idle, but the user still finds it OK to delay message reception for a second, so latency isn't their biggest concern. A message might come 5 milliseconds after they received
Oh, that's a fun thing really. They in fact don't re-enable the watch during

You see, as I mentioned in #92, there are

And every received message counts toward it. But a message doesn't stop being counted toward it when dispatched. Even when messages lie in our
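A toy model of the accounting being described (the limit, the sizes, and the exact release rule are assumptions for illustration, not the real libdbus code):

```rust
// Models byte accounting against a max_received_size-style limit: bytes of
// received messages count toward the limit, and reading stops while the
// counter is at or above it. Crucially, in this model, dispatching a message
// to an application-side queue does NOT release its bytes -- only actually
// freeing the message does, which is what keeps the socket blocked.
struct Conn {
    counted_bytes: usize,
    max_received: usize,
}

impl Conn {
    // Returns true if the library would still read from the socket.
    fn may_read(&self) -> bool {
        self.counted_bytes < self.max_received
    }

    fn receive(&mut self, msg_bytes: usize) {
        self.counted_bytes += msg_bytes;
    }

    // Handing the message to an app-side pending queue keeps it counted.
    fn dispatch_to_app_queue(&mut self, _msg_bytes: usize) {}

    // Only actually freeing the message releases its bytes.
    fn free(&mut self, msg_bytes: usize) {
        self.counted_bytes -= msg_bytes;
    }
}

fn main() {
    let mut conn = Conn { counted_bytes: 0, max_received: 1000 };

    conn.receive(600);
    conn.dispatch_to_app_queue(600); // now parked in "our" pending queue
    conn.receive(500);               // 1100 >= 1000: limit reached

    // Messages parked in the app queue still count, so reads stay blocked.
    assert!(!conn.may_read());

    // Only dropping the parked message unblocks reading again.
    conn.free(600);
    assert!(conn.may_read());
    println!("ok");
}
```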
Well, as an alternative to watches/pending_items notifications, we can try to screw up the libdbus counting by using
So

thing never really happens. But calling
You can test the halting driver problem by utilizing #93 and setting the byte limit to some small value. Say

```rust
conn.register_name("org.dbus.tokio.Test", 7).unwrap();
unsafe { ffi::dbus_connection_set_max_received_size(conn.conn(), 1); }
```

in my test server code above.
Hrm... thanks for the explanation. Annoying. :-/ This is just a bit of brainstorming, but would it work if we, in
No, as we need to run
Calling that on every

It would mean an amply-sized

It would depend on the user not doing anything with

And worst of all,
I'm afraid I don't see anything working here, except an honest notification from the watch update callback, or at least some high-level callback of our own invention, triggered after every
Well, maybe with

```rust
std::mem::drop(self.0.take());
self.1.notify();
```

in
Well, if we're going to heap-allocate anyway,
And yeah,
Ok, ok. So that means either we need to protect
I'm not sure how? We can't block waiting for a condition inside
I doubt that deferred handling of events from watch callbacks via a queue is a good idea at all. Not only does it result in the halted-driver problem, but also when

But here's the problem. The current

Luckily, as far as I can tell, watch removals should now happen only while we're still in

If instead of scraping the queue for
Ah, no, I forgot about
These two are now pushed.
Just a quick comment: I played around with the dbus-tokio code these last days and it works quite well! So far no complaints ;)
I'm going to close this issue, as we now have something up and running and nobody seems to be actively complaining or requesting new features. I've opened #99 for further discussion/fixing of the watchfd issues. Thanks everybody for helping out!
It would be nice if it were easier to use this dbus library in an async fashion.