
Missing event loop understanding documentation #9

Closed
diwic opened this issue May 14, 2017 · 10 comments

diwic commented May 14, 2017

Sorry if the description is vague, but I keep hitting my head against the wall trying to understand how the bits and pieces of Tokio fit together and how to make them work with, e.g., my dbus crate. I hope you don't see this as a rant; I'm trying to express my frustration as constructively as I can.

I've seen other main loops; they all seem to support a few crucial methods, which you can then build upon:

  • have a callback ASAP, but at the end of the queue, like node.js's process.nextTick.
  • have a callback after some time, like node.js's setTimeout and setInterval.
  • have a callback when a file descriptor is ready for reading or writing (or error/hangup/etc)
  • basic stuff like making the main loop run and quit, like glib's g_main_loop_quit.

Out of these, I've only found tokio-timer to deal with the second one. The other three I don't understand or can't find, and trying to read the documentation only ends with me asking myself things like "okay, so a Task is something to be executed in the future; hmm, wait, that's what a Future is, so why do we need both concepts?", rather than finding the answers to my questions.

The first one is probably simple; I just don't understand the Tokio architecture well enough to know how to write it.

For the third one, brson wrote some draft code, and it seems utterly complex to get right for someone new to Tokio; it even includes an extra call to libc::poll, which I hope should not be necessary...?

As for the fourth one: there is Core::run, which unlike other loops takes an argument and instead has no quit method. I guess these styles are somewhat equivalent and either can be used to emulate the other, but nothing in the documentation about event loops (as far as I've found) indicates how to get a server to quit cleanly; the examples all end with

let server = listener.incoming().for_each(/* ... */);
core.run(server);

so it's not obvious how to do it.

I'm sure the documentation is great if you want to build your own network protocol like IMAP or NTP, but for us coming from a completely different angle, it's a different story.

@carllerche (Member)

I agree that these points are not explained very clearly. I'm going to try to rework the docs sooner rather than later...

Part of the problem is that the tokio reactor doesn't behave the same as something like node's event loop, so thinking about it as you are (i.e. "have a callback after some time / next tick / etc.") is a bit incorrect, and of course that's going to be confusing (not your fault; the docs aren't structured well).

@alexcrichton (Contributor)

FWIW, if it helps, these are some translations of what may be familiar concepts to you:

have a callback ASAP, but at the end of the queue, like node.js's process.nextTick

This is basically:

// Note: the closure passed to future::lazy must return an IntoFuture.
handle.spawn(future::lazy(|| /* your callback */ Ok::<(), ()>(())));

The basic idea behind the tokio-core event loop is that everything is a future, and that's really the only way to interact with the event loop. So in this case you express your code as a future, which gets spawned onto the reactor and executed later.
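
To make that concrete, here's a minimal runnable sketch of the "next tick" pattern, assuming tokio-core 0.1 and futures 0.1:

extern crate futures;
extern crate tokio_core;

use futures::future;
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    // The closure runs once the reactor gets around to polling the
    // spawned future: "as soon as possible, but at the end of the queue".
    handle.spawn(future::lazy(|| {
        println!("next tick");
        Ok(())
    }));

    // Drive the reactor for one iteration so the spawned future runs.
    core.turn(None);
}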

have a callback after some time, like node.js's setTimeout and setInterval.

Similar to above, this'd look like:

let timeout = Timeout::new(dur, &handle).unwrap();
handle.spawn(timeout.then(|_| {
    // code to run after the timeout fires
    Ok(())
}));

Like above, everything's a future, including Timeout. You're then responsible for spawning it onto the reactor and otherwise managing what happens after it fires. There's an Interval type in tokio-core as well, for a Stream of events rather than just one; see the sketch below.
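
A hedged sketch of the setInterval analog, again assuming tokio-core 0.1 and futures 0.1; Interval is a Stream, so each tick shows up as an item:

extern crate futures;
extern crate tokio_core;

use std::time::Duration;
use futures::Stream;
use tokio_core::reactor::{Core, Interval};

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();
    let ticks = Interval::new(Duration::from_secs(1), &handle).unwrap();

    // for_each runs the closure on every tick; take(3) ends the stream
    // after three ticks so this example terminates.
    core.run(ticks.take(3).for_each(|()| {
        println!("tick");
        Ok(())
    })).unwrap();
}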

have a callback when a file descriptor is ready for reading or writing (or error/hangup/etc)

This is the primary use case of the PollEvented type. You can construct a PollEvented from any type that implements mio::Evented, which you can implement for arbitrary file descriptors through mio::unix::EventedFd. You'll then use various methods on PollEvented to arrange for the local future to wake up when the descriptor is ready. There's not really a direct analog for "run this callback when a file descriptor is ready"; rather it's "when this file descriptor is ready, your future gets notified", and then you define what to do with that notification.
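
Here's a hedged sketch of that pattern, assuming mio 0.6 and tokio-core 0.1; the MyFd and FdReadable types are illustrative names, not an existing API:

extern crate futures;
extern crate mio;
extern crate tokio_core;

use std::io;
use std::os::unix::io::RawFd;

use futures::{Async, Future, Poll};
use mio::unix::EventedFd;
use tokio_core::reactor::{Handle, PollEvented};

// Minimal Evented wrapper that delegates to EventedFd for a raw fd.
struct MyFd(RawFd);

impl mio::Evented for MyFd {
    fn register(&self, poll: &mio::Poll, token: mio::Token,
                interest: mio::Ready, opts: mio::PollOpt) -> io::Result<()> {
        EventedFd(&self.0).register(poll, token, interest, opts)
    }
    fn reregister(&self, poll: &mio::Poll, token: mio::Token,
                  interest: mio::Ready, opts: mio::PollOpt) -> io::Result<()> {
        EventedFd(&self.0).reregister(poll, token, interest, opts)
    }
    fn deregister(&self, poll: &mio::Poll) -> io::Result<()> {
        EventedFd(&self.0).deregister(poll)
    }
}

// A future that resolves once the fd becomes readable.
struct FdReadable {
    io: PollEvented<MyFd>,
}

impl FdReadable {
    fn new(fd: RawFd, handle: &Handle) -> io::Result<FdReadable> {
        Ok(FdReadable { io: PollEvented::new(MyFd(fd), handle)? })
    }
}

impl Future for FdReadable {
    type Item = ();
    type Error = io::Error;

    fn poll(&mut self) -> Poll<(), io::Error> {
        // poll_read registers interest and arranges for the current task
        // to be notified when the fd becomes readable.
        match self.io.poll_read() {
            Async::Ready(()) => Ok(Async::Ready(())), // readable: act here
            Async::NotReady => Ok(Async::NotReady),
        }
    }
}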

basic stuff like making the main loop run and quit, like glib's g_main_loop_quit.

Right now the idea is that you could do something like core.run(futures::empty()) to run forever, or if you want to run with an exit signal you could do something like:

let (tx, rx) = oneshot::channel();
// ...
core.run(rx).unwrap();

In that example, when you send a message on tx you'll cause the reactor to terminate by returning from run.
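
Put together, a minimal runnable sketch of that shutdown pattern (futures 0.1 and tokio-core 0.1; the two-second Timeout merely stands in for a real shutdown trigger):

extern crate futures;
extern crate tokio_core;

use std::time::Duration;
use futures::Future;
use futures::sync::oneshot;
use tokio_core::reactor::{Core, Timeout};

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();
    let (tx, rx) = oneshot::channel::<()>();

    // Stand-in for a real shutdown trigger: fire after two seconds.
    let timeout = Timeout::new(Duration::from_secs(2), &handle).unwrap();
    handle.spawn(timeout.then(move |_| {
        let _ = tx.send(()); // ignore the error if the receiver is gone
        Ok(())
    }));

    // run() returns once rx resolves, i.e. once tx.send() happens.
    core.run(rx).unwrap();
    println!("event loop exited cleanly");
}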


The general theme here is that the "traditional event loop operations" are actually relatively heavyweight and expensive. For example, scheduling callbacks involves boxing up closures, and support for an exit signal may be unnecessary for some applications. The Core type itself attempts to be a relatively general-purpose event loop while staying as low-cost as possible.

Also, the general theme is that we need much better docs! As @carllerche mentioned, we're trying to focus on improving the documentation soon.

diwic (Author) commented May 19, 2017

Thanks @alexcrichton for these explanations; they do indeed help. Most of this is clear, just two follow-ups:

There's not really a direct analog for "run this callback when a file descriptor is ready"; rather it's "when this file descriptor is ready, your future gets notified", and then you define what to do with that notification.

So, I'm supposed to call a function in the dbus library that takes whatever flags come from poll. That means I'm supposed to use PollEvented::poll_read and PollEvented::poll_write to reassemble the poll flags (right?), but then it seems we're missing at least PollEvented::poll_err and PollEvented::poll_hangup?

In that example, when you send a message on tx you'll cause the reactor to terminate by returning from run.

And so if I want to do other things while waiting on that channel, e.g. running a server, would I then combine the channel future with the server future using some future combinator, or would I just use handle.spawn for the server future and then core.run for the channel future?

@alexcrichton (Contributor)

@diwic yeah, so unfortunately a library like dbus is likely going to be pretty difficult to integrate with tokio-core. In general it's just a tough problem! I've had some experience with this integrating the curl library into tokio-core with tokio-curl, where the unix implementation is pretty hairy. When I was talking with @brson as he sent you your initial PR, it sounded like the core primitives of curl and dbus were similar, so I'd imagine it's somewhat nontrivial to implement with dbus as well, unfortunately :(. I'm more than willing to chat with you online about this! If you have questions, feel free to reach out to me on IRC.

Right now stuff like "err" and "hangup" is handled by UnixReady in mio itself, although tokio-core doesn't currently export this through the PollEvented type, leading to PRs such as tokio-rs/tokio-core#199. I'll try to put something together today which exposes the entire mio::Ready structure so you can poke at all the events. Currently there is no way to listen for the is_hup or is_error event with tokio-core alone.

For running other stuff on the reactor yeah, you've got two options:

  • Futures spawned onto the reactor will always run while the core.run call is waiting (this is the concurrency story of tokio-core).
  • The future passed to core.run could use, for example, select. That way, when either future resolves (the never-ending future of your server or the shutdown channel), the core.run call will return; see the sketch below.
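
A hedged sketch of the second option, where future::empty stands in for a never-resolving server future (futures 0.1 and tokio-core 0.1):

extern crate futures;
extern crate tokio_core;

use futures::{future, Future};
use futures::sync::oneshot;
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();
    let (tx, rx) = oneshot::channel::<()>();

    // Stand-in for the real server future, which never resolves on its own.
    let server = future::empty::<(), ()>();

    // Both sides of select must agree on Item and Error types.
    let shutdown = rx.map_err(|_| ());
    let combined = server.select(shutdown).map(|_| ()).map_err(|_| ());

    // Sending on tx resolves rx, so the select (and thus run) returns.
    tx.send(()).unwrap();
    core.run(combined).unwrap();
    println!("either the server finished or shutdown was requested");
}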

@alexcrichton (Contributor)

Ok, I've submitted tokio-rs/tokio-core#208 to make it possible to learn about hup/error events.

diwic (Author) commented May 20, 2017

@alexcrichton cool, thanks for wanting to help out. Somebody to ask is exactly what I need to be able to make some progress here. Given your previous answer I was able to get a smoke test up and running. The err/hangup things, I think, are only relevant in case dbus shuts down or other weird things happen; still, it would be good to have proper support for them.

If you don't mind me asking in this thread: the next thing I've run into that I don't understand is how to cancel futures that I've spawned onto the event loop. In essence, what do you do if you have a "master future" that controls a list of "child futures" that should all be on the event loop in parallel, and you want to cancel/remove one of the child futures? I currently do handle.spawn for all child futures, but then I can't cancel them.

@alexcrichton (Contributor)

Nice! And yeah no worries discussing here.

In general, cancellation with futures is "drop the future". So the question "how do I cancel a future?" becomes "how do I drop this future?". This basically means that you'll always need some owning handle to the future you'd like to drop if you'd like to cancel it at some point. Typically this happens naturally as pending futures fall out of scope and are canceled (e.g. as struct fields or other local variables).

So if you pass a future literally to handle.spawn, it's then impossible to cancel it: the event loop takes ownership of that future and you've got no way to cancel it (a Core doesn't allow cancelling a running future). What you'd want instead is to hand the Core a wrapper around the future you'd like to cancel. You can sort of see an example of this in rust-lang/futures-rs#455 and also in CpuPool::spawn today. The common theme is that when you spawn a future you're given back a handle to the completed value; when you drop this handle it sends a signal back to the spawned future that it should exit immediately (and therefore get dropped). With that construction, cancellation is "drop the future's handle".
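
For example, a hedged sketch of that wrapper using a oneshot channel and select (futures 0.1; the CancelHandle and cancelable names are illustrative, not an existing API):

extern crate futures;

use futures::Future;
use futures::sync::oneshot;

// Dropping this handle cancels the wrapped future.
struct CancelHandle {
    _tx: oneshot::Sender<()>,
}

// Wrap `f` so it is dropped early once the returned handle is dropped.
fn cancelable<F>(f: F) -> (Box<Future<Item = (), Error = ()>>, CancelHandle)
where
    F: Future<Item = (), Error = ()> + 'static,
{
    let (tx, rx) = oneshot::channel::<()>();
    // When the handle (and thus tx) is dropped, rx resolves with Canceled;
    // the select then finishes, and the child future `f` gets dropped.
    let cancel = rx.then(|_| Ok::<(), ()>(()));
    let wrapped = f.select(cancel).map(|_| ()).map_err(|_| ());
    (Box::new(wrapped), CancelHandle { _tx: tx })
}

You'd then handle.spawn the boxed future and keep the CancelHandle in your "master" structure; dropping the handle cancels that child.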

Does that make sense?

diwic (Author) commented May 23, 2017

@alexcrichton Having understood some more of Tokio (i.e. that an "inner future" can wake up an "outer future" because the task is connected to the "outer future"; this is how Select works), I think I'll rewrite my implementation to use that strategy instead, which avoids the problem altogether.

I would then use UnparkEvent to know why my task was woken up, but that requires EventSet, and EventSet does not seem to be implemented for anything? What would typically implement EventSet, perhaps Mutex<HashSet<usize>>...?

Btw, a major source of confusion was that task::park() is very different from thread::park; the former just returns a handle, and does not actually park anything. Maybe task::current() would be a better name?

@alexcrichton (Contributor)

Ah yeah, right now EventSet doesn't have any public implementors, as we wanted to retain flexibility moving forward. You may wish to look at the Stack type in the futures crate, which implements this (it's basically a Mutex<HashSet<usize>>, sorta).

About confusion with task::park, I've got good news for you! (we're renaming to task::current)

@carllerche (Member)

Hopefully this is resolved now. Closing due to inactivity. Feel free to open again on the repo that fits best. This repo is being reclaimed to implement tokio-rs/tokio-rfcs#3.
