This repository has been archived by the owner on Jun 21, 2019. It is now read-only.

RFC: Tokio reform #2

Closed

wants to merge 2 commits into from

Conversation

aturon
Contributor

@aturon aturon commented Sep 14, 2017

This RFC proposes to simplify and focus the Tokio project, in an attempt to make
it easier to learn and more productive to use. Specifically:

  • Add a global event loop in tokio-core that is managed automatically by
    default. This change eliminates the need for setting up and managing your own
    event loop in the vast majority of cases.

    • Moreover, remove the distinction between Handle and Remote in
      tokio-core by making Handle both Send and Sync and deprecating
      Remote. Thus, even working with custom event loops becomes simpler.
  • Decouple all task execution functionality from Tokio, instead providing it
    through a standard futures component. As with event loops, provide a default
    global thread pool that suffices for the majority of use-cases, removing the
    need for any manual setup.

    • Moreover, when running tasks thread-locally (for non-Send futures),
      provide more fool-proof APIs that help avoid lost wakeups.
  • Provide the above changes in a new tokio crate, which is a slimmed down
    version of today's tokio-core, and may eventually re-export the contents
    of tokio-io. The tokio-core crate is deprecated, but will remain available
    for backward compatibility. In the long run, most users should only need to
    depend on tokio to use the Tokio stack.

  • Focus documentation primarily on tokio, rather than on
    tokio-proto. Provide a much more extensive set of cookbook-style examples
    and general guidelines, as well as a more in-depth guide to working with
    futures.

Altogether, these changes, together with async/await, should go a long
distance toward making Tokio a newcomer-friendly library.

Rendered

@aturon aturon mentioned this pull request Sep 14, 2017
@alexcrichton
Contributor

Thanks @aturon! Some thoughts:

  • I think the current_thread module differs in the leading examples and the detailed design.
  • I might also propose Timeout::new as a convenience constructor? (taking a duration) sort of like we have TcpStream::connect. Is there a reason to remove the top-level constructor though?

Member

@carllerche carllerche left a comment


Minor comments are inline. I will be posting a more comprehensive comment soon.

tokio-reform.md Outdated

On the documentation side, one mistake we made early on in the Tokio project was
to so prominently discuss the `tokio-proto` crate in the documentation. While
the crate was intended to make it very easy to get basic protocol
Member


Should be "basic request/response oriented protocol implementations"


fn serve(addr: SocketAddr, handle: Handle) -> impl Future<Item = (), Error = io::Error> {
TcpListener::bind(&addr, &handle)
.into_future()
Member


Why is into_future needed here? Binding should be immediate?

tokio-reform.md Outdated

fn serve(addr: SocketAddr) -> impl Future<Item = (), Error = io::Error> {
TcpListener::bind(&addr)
.into_future()
Member


Why is into_future needed here? Binding should be immediate?

tokio-reform.md Outdated

### The `io` module

Finally, there may *eventually* be an `io` modulewith the full contents of the
Member


First, typo (modulewith).

Also, I would clarify that it would be "with a subset or the full contents". I think it could be entirely plausible that we don't re-export everything.

could build a solid http2 implementation with it; this has not panned out so
far, though it's possible that the crate could be improved to do so. On the
other hand, it's not clear that it's useful to provide that level of
expressiveness in a general-purpose framework.
Member


I would say that there are three paths forward

  • Try to improve tokio-proto such that h2 wants to use it (low probability of success i think).
  • Significantly simplify tokio-proto to make it easy to use for simpler cases.
    • Focus on ease of use over raw performance and features.
    • This would most likely mean getting rid of streaming bodies
    • This would also most likely mean that hyper wouldn't use it at all.
  • Completely deprecate it.


Do you think there are any major protocols that are simple enough that a production implementation might use a significantly simplified version of tokio-proto?


istm that it would be better to kill tokio-proto for now, and when we have some experience in h2 and Hyper, then try and factor out a useful library. Designing the library ahead of time seems doomed to failure in this context.


There are lots of good things in tokio-proto, such as the multiplexing code. I'd like to see that salvaged, if possible.

Member


@nrc the experience w/ h2 and hyper has been acquired and has informed my list of possible paths forward.

I do agree with @tikue that there is a lot of useful stuff in tokio-proto. As a lib, I don't think it can be used when one wants to implement the most efficient client / server possible, but I do think that it could be useful to get something done fast.

As such, I think that focusing on that case (getting something done fast) could be more successful. This would be admitting that performance sacrifices are fine for ergonomic wins.


What I actually found really helpful even though I ended up not using tokio-proto was that it suggested a model for layering the abstractions that I did end up following. It even provided names which would sound familiar to anyone who had looked at tokio-proto before.

I think there is tremendous value in that alone.


I've just been building an RPC mechanism using tokio and I'm making use of tokio-proto for handling multiplexed messages. I'd be sad if that went away.

@carllerche
Member

Thanks @aturon for writing this up. This was quite a good read and I am quite happy with how this is turning out. Some thoughts follow.

ReactorId

I'm not sure what this is for or how it is intended to be used.

Reactor and current_thread

I'm not sure how these two constructs are intended to be executed on the same thread. Specifically, Reactor provides turn which blocks the current thread and TaskRunner provides block_on_all which also blocks the current thread. These don't seem compatible.

Breaking down the requirements, the reactor is the structure that actually needs to control blocking (so that it can call epoll_wait).

Either TaskRunner has to implement Future or there has to be a way to adapt TaskRunner -> Future.

Reactor::turn would then take two arguments: first, a future that, on completion, forces it to return, and second, a max timeout.

Something like:

fn turn<T: Future>(&mut self, fut: T, timeout: Option<Duration>) -> Result<T::Item, T>;

In this respect, Reactor::turn would be an executor.
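
A rough usage sketch of that shape in a fully single-threaded setup. Everything here (Reactor, TaskRunner, the into_future adapter) is the proposed/hypothetical API from this comment and the RFC, not anything that exists today:

// Hypothetical: spawn tasks on a thread-local runner, then let the reactor
// block (epoll_wait) until every task has completed, capping each turn at 100ms.
let mut reactor = Reactor::new()?;
let mut tasks = TaskRunner::new();
tasks.spawn(my_future);

// Adapt the runner into a future that resolves once all its tasks are done.
let mut all_done = tasks.into_future();

loop {
    match reactor.turn(all_done, Some(Duration::from_millis(100))) {
        Ok(_) => break,                                  // every spawned task finished
        Err(still_pending) => all_done = still_pending,  // keep turning
    }
}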

Timer

If Reactor provides the turn API with a timeout, then there is no need to couple a timer with the reactor. That said, the global event loop should probably come with a timer included.

This means that the timer shouldn't be able to take a &Handle argument because it isn't actually paired with a reactor.

So basically, for now timeouts always run on the global event loop thread. Even if the "default Handle" is switched... this could be confusing, but I'm also trying to not front load timer design.

Handle::global

  • Could this be Handle::default() to imply that it's not necessarily a global handle (the handle could be changed).

TcpListener::accept

Right now, TcpListener::accept returns a TcpStream that is bound to the same reactor as the listener. This makes it pretty difficult to implement a multi reactor system.

The exact way to solve this is going to be related to how we allow TcpStream to connect to a custom &Handle.

Swapping the reactor that Handle::default() points to

As I mentioned in gitter, I would like it to be possible to change the reactor that is referenced by Handle::global, but only for an executor context. In other words, I want to be able to create a futures-cpupool and say "Reactor X should be used for all I/O objects created on this pool". As the RFC mentions, allowing ad-hoc changes to Handle::default() is error prone.

I think Enter provides a good way to achieve this goal.

let swapped_default = reactor::with_handle(&enter);

// stuff....

Member

@seanmonstar seanmonstar left a comment


Thanks for the marvelous write-up! It's always a joy to read your RFCs.

tokio-reform.md Outdated
.incoming()
.for_each(move |(conn, _)| {
let (reader, writer) = conn.split();
CurrentThread.spawn(copy(reader, writer).then(move |result| {
Member


I think the current_thread module differs in the leading examples and the detailed design.

Is this what you meant? This example is out of date?

Ok(())
}));
Ok(())
})
Member


Looking at this example, it does show a pattern that is extremely common for servers: spawning a listener and then spawning tasks for each accepting socket. I wonder if we could make this sort of thing easier to do, even without async/await.

fn serve(addr: SocketAddr) -> impl Future {
    TcpListener::bind(addr).and_then(|listener| {
        listener.incoming().for_each(|(conn, _)| {
            let (reader, writer) = conn.split();
            // specifically, just return a Future here
            copy(reader, writer).map_err(|err| {
                println!("echo error: {}", err);
            })
        })
    })
}


extremely common for servers

Not so much when you need a limit on the number of connections. This means either BufferedUnordered, or some kind of semaphore across all the spawned coroutines. Currently, it looks like the former solution works fine (with tk-listen extensions, though).

@seanmonstar
Member

Could this be Handle::default() to imply that it's not necessarily a global handle (the handle could be changed).

This sounds nice!

Right now, TcpListener::accept returns a TcpStream that is bound to the same reactor as the listener. This makes it pretty difficult to implement a multi reactor system.

Though, it sounds like this may not be as much of an issue anymore, since by default, epoll will be on its own thread, and tasks that would be using the TcpStream will be in the separate task runner thread. From what I've seen so far, the reason to have Cores on multiple threads at the moment is because the tasks using the CPU to respond to requests are what need to be scaled, epoll scales fine. No?

@carllerche
Member

@seanmonstar

Though, it sounds like this may not be as much of an issue anymore, since by default, epoll will be on its own thread, and tasks that would be using the TcpStream will be in the separate task runner thread. From what I've seen so far, the reason to have Cores on multiple threads at the moment is because the tasks using the CPU to respond to requests are what need to be scaled, epoll scales fine. No?

It doesn't matter if you use the default event loop, but it does if you want to take an approach like Seastar where you have many threads that are fully isolated (reactor per thread, almost like a multi process architecture).

@antoyo

antoyo commented Sep 19, 2017

My main pain point with tokio (besides the abstractions as described here) is that it's very hard to debug code using tokio.
Actually, the only way I can think of is using strace, whereas you have more options to debug your code when using mio directly.
I'm not the only one having this issue, so it'd be nice to improve the debugging story of the library.
Thanks.

@aturon
Contributor Author

aturon commented Sep 19, 2017

@antoyo that should definitely be on the roadmap, thanks!

@Lokathor

Example proposal: IRC bot/client.

@alex

alex commented Sep 19, 2017

Please don't add a global event loop.

I come from the Python world, and I think if you asked every twisted core developer, 90% of them would say having a global event loop was a mistake. It leads to the following problems:

  • Tons of APIs (both core twisted, and third party) don't take an event loop argument, so they run only on the global one.
  • Lots of other APIs have a default of the global event loop, so even if you do have a second event loop, it's easy to accidentally put things on the wrong event loop.
  • Testing is harder; running two tests in parallel is outright impossible.

There's plenty that's challenging about learning Tokio, but I've found creating a Core and .run()ing a Future to be incredibly smooth and easy.

This is not the hard part of learning Tokio, and I think it's a bad place to optimize. Global state makes testing harder, it makes fuzzing harder, it makes reading code harder.

@carllerche
Member

carllerche commented Sep 19, 2017

@alex thanks for the feedback.

Tons of APIs (both core twisted, and third party) don't take an event loop argument, so they run only on the global one.

  • Most libs should ideally be generic over AsyncRead + AsyncWrite (see Hyper and h2 for examples, and the sketch after this list), in which case they are generic over the event loop.
  • There will be an API to change the default event loop at the executor level.
  • Libs that are missing APIs can be fixed.
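
To illustrate the first point, a minimal sketch (not from the RFC; the protocol and names are made up) of a library function written against tokio-io's traits instead of a concrete socket or reactor:

extern crate futures;
extern crate tokio_io;

use std::io;
use futures::Future;
use tokio_io::{AsyncRead, AsyncWrite};
use tokio_io::io::{read_exact, write_all};

// A 4-byte ping/pong handshake over any async transport. The function never
// names TcpStream, Core, or Handle, so the caller decides which event loop
// (or in-memory mock) drives it.
fn handshake<T>(io: T) -> impl Future<Item = (T, [u8; 4]), Error = io::Error>
where
    T: AsyncRead + AsyncWrite,
{
    write_all(io, *b"PING")
        .and_then(|(io, _)| read_exact(io, [0u8; 4]))
}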

Lots of other APIs have a default of the global event loop, so even if you do have a second event loop, it's easy to accidentally put things on the wrong event loop.

This argument applies to any default-over-configuration option. It's a balance to weigh. I think the ergonomics win out, especially since most of the time you probably are fine w/ just using the default event loop.

Testing is harder; running two tests in parallel is outright impossible.

I'm not sure I follow this given that two bits of code that use the same default event loop should be fairly independent. That said, again, changing the default event loop at an executor level will be possible.

I would also add, that as a counterpoint to python, there are many other environments (Go, erlang, node / libuv, ...) in which a global event loop has been successful. Unfortunately, I don't know much about the async I/O story in python, but could the global event loop be a symptom and not the root cause?

@mithrandi

As far as Twisted goes, I think you can up that percentage to 100%.

Comparing to asyncio is also instructive: I'm not sure how asyncio programmers feel about it, but asyncio has a sort of hybrid approach with a thread-local(?) event loop that can be switched out, which turns some things that would be impossible in Twisted into merely hard things, but I still think the result is far from ideal.

@aturon
Contributor Author

aturon commented Sep 20, 2017

I think it'd be helpful to spell out the cases where we anticipate actually using multiple Tokio event loops (which the Tokio team has viewed as fairly niche). @alex, could you say more about why this desire comes up frequently in Python?

@alex

alex commented Sep 20, 2017

Cases that come up that are poorly served by Twisted's global reactor, and which tokio currently does well:

  • Event loop per thread
  • Testing. Lack of isolation between tests is horrid to debug.
  • Calling asynchronous code from synchronous code: in this case it's incredibly useful to be able to create a Core, run the Future, and throw it away.

@aturon
Contributor Author

aturon commented Sep 20, 2017

Thanks @alex!

One point that I think is very important to note: this RFC makes a major shift in what an "event loop" even means. In particular, the event loop is no longer tied to task execution. I think this might be part of the disconnect.

Lemme dig in here:

  • Event loop per thread

Can you spell this out a bit more? What's the motivation in more detail?

  • Testing. Lack of isolation between tests is horrid to debug.

So, executor-level customization of the reactor should help with this. But also, note that if you're talking about task execution then none of this applies anyway -- you get totally separate executors.

  • Calling asynchronous code from synchronous code: in this case it's incredibly useful to be able to create a Core, run the Future, and throw it away.

Again, this piece is broken out of Reactor and instead addressed through the current_thread functions.

@Diggsey

Diggsey commented Sep 20, 2017

This does seem to improve usability a lot, and generally separate concerns more neatly, so that's good, but the increased reliance on thread-locals, global mutable state, and implicitness/magic is a little concerning.

The default event loop(s) created by tokio

Perhaps this could be clarified in the RFC, but I presume these are created the first time a Handle is default-initialised? These lines are a little ambiguous:

Pushing further along the above lines, we can smooth the path toward a particular setup that we believe will work well for the majority of applications:

  • One dedicated event loop thread, which does not perform any task execution.
  • A global thread pool for executing compute-intensive or blocking tasks; tasks must be Send.
  • Optionally, a dedicated thread for running thread-local (non-Send) tasks in a cooperative fashion. (Requires that the tasks do not perform too much work in a given step).

These are things to be made easier rather than things which should happen automatically, right? It's not clear from the phrasing.

Duplication of methods taking Handle or not

I suppose a language-level feature would be useful here. For lack of that, would a factory-style API work? If not, I think the duplicate APIs should at least follow a consistent naming scheme instead of conflating the customisation of the Handle parameter with customisation of other behaviours, and having totally different names.

For example, if I'm using TcpListener::bind() and decide I want to override the handle. How am I supposed to figure out that I should switch to using TcpListener::from_listener(...) while calling TcpListener::bind from libstd? Overall, passing an extra Default::default() parameter may be the better option, and it's always possible it could be made optional in future via the addition of default arguments as a language feature, without needing even more churn to the API.
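
To make the discovery problem concrete, here is a rough sketch of the three shapes being compared (from_listener is today's handle-taking constructor, the one-argument bind is the RFC's proposed convenience form, and the defaulted extra parameter is the hypothetical alternative suggested above):

// Proposed convenience constructor: binds against the default/global handle.
let listener = TcpListener::bind(&addr)?;

// Fully explicit form today: go through std and pass the handle yourself.
let std_listener = std::net::TcpListener::bind(&addr)?;
let listener = TcpListener::from_listener(std_listener, &addr, &my_handle)?;

// Hypothetical alternative: one constructor with a defaultable handle
// parameter, rather than a differently named method to discover.
let listener = TcpListener::bind(&addr, Default::default())?;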

TaskRunner

I don't really understand why this both has its own blocking methods, whilst also implementing Future directly (spelt Futue in the RFC btw). This seems like a recipe for disaster: it blurs the line between future combinators vs executors. Assuming I've interpreted it correctly, I think it would make more sense to have a method that returned a future which completed when some condition was met (eg. all scheduled tasks completing) rather than directly implementing Future on the task runner itself, then the name of the method can indicate what the completion criteria are.

Special casing for tokio

This design works fine for tokio, because you seem to have found a way to make it irrelevant whether the code performing the I/O actually runs on the event loop or not.

  1. Is this genuinely zero cost? ie. Is there no performance cost to performing the actual I/O operations on a separate thread to the event loop? If there is, is it still possible to get the old behaviour with the new API? I think an example of how that would be done would be useful.

  2. What about other event loops where this is important. For example, code for UI and graphics is usually single-threaded: all interaction with the UI must be done on that thread. This usually requires having complete control over the spawning mechanism (eg. in winforms, controls have an Invoke method which specifically runs some code on the UI thread for that control, which works by sending a specially crafted message onto the event loop).

Going the Handle/Remote route is not ideal as has been discovered. It also means that you have to spawn futures in multiple stages:

  1. use Remote to spawn a function which receives the non-send Handle as input.
  2. use Handle to get access to the UI
  3. create a future using methods on some UI object

You can't just create a future first, and then spawn it onto the UI, because there's no way (short of using thread locals) for the future to receive a reference to the non-Send UI.

This is one of the reasons I wanted Futures to take the current task as a parameter rather than via a thread local: different executors could have different "task" types, as long as they all implemented the same basic Task trait. Now, simple CPU-bound futures would be able to implement Future for all possible Task types, whereas futures which need to be scheduled to a particular loop (eg. a future that updates the UI) could implement Future just for the UITask type, and that type would provide the accessors to the UI itself. This statically prevents you from combining incompatible futures at compile time.
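
For what it's worth, a tiny self-contained sketch of that idea (every name here is hypothetical; futures-rs does not work this way today): the task is a poll parameter, so a UI-bound future can only be polled with the UI executor's task type, and handing it to an executor with a different task type fails to compile.

use std::cell::RefCell;

enum Async<T> {
    Ready(T),
    NotReady,
}

trait Task {
    fn notify(&self);
}

// Futures are generic over the task type they can be polled with.
trait ParamFuture<T: Task> {
    type Item;
    fn poll(&mut self, task: &T) -> Async<Self::Item>;
}

// A CPU-bound future works with *any* task type.
struct AddOne(u32);
impl<T: Task> ParamFuture<T> for AddOne {
    type Item = u32;
    fn poll(&mut self, _task: &T) -> Async<u32> {
        Async::Ready(self.0 + 1)
    }
}

// The UI executor's task type carries access to the (non-Send) UI state.
struct UiTask {
    window_title: RefCell<String>,
}
impl Task for UiTask {
    fn notify(&self) { /* would re-queue the task on the UI loop */ }
}

// A UI-only future implements ParamFuture solely for UiTask, so it statically
// cannot be polled by an executor whose task type is different.
struct SetTitle(&'static str);
impl ParamFuture<UiTask> for SetTitle {
    type Item = ();
    fn poll(&mut self, task: &UiTask) -> Async<()> {
        *task.window_title.borrow_mut() = self.0.to_string();
        Async::Ready(())
    }
}

fn main() {
    let ui = UiTask { window_title: RefCell::new(String::new()) };
    let _ = AddOne(1).poll(&ui);         // fine: generic over any task
    let _ = SetTitle("hello").poll(&ui); // fine: task types match
}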

Remaining footguns and best practices

The RFC mentions a couple (eg. multiple executors on one thread). What other footguns are there that are not statically prevented with the new design? Obviously that is hard to answer, but users will need to be able to tell if they're using futures correctly, without danger of deadlocks or tasks being lost. It will not be possible to statically prevent all possible misuses, but it should be possible to document exactly what constraints exist without requiring full knowledge of how futures and executors are implemented.

@alexcrichton
Contributor

Thanks for the comments @alex! I figured I'd add to what @carllerche and @aturon mentioned already:

Tons of APIs (both core twisted, and third party) don't take an event loop argument, so they run only on the global one.

In addition to what @carllerche already mentioned I'll add to the idea that I don't think that this "con of Python today" is strictly derived from having a global event loop. One alternative we considered when hashing out this design was to actually continue to have all functions require a Handle, as they do today.

Notably, we'd still have a global event loop! You could call something like Handle::default() to acquire the global event loop's handle (still swappable at the executor level like @carllerche said) but the convention of requiring Handle all the time was thought to mitigate this problem where libraries are tied to one mode or the other.

Even this, though, can have a downside! (and this one is more related to having a global at all) Let's say you've got a big application that didn't want to bother passing around handles, but all the libraries you use take handles. This means that in the bowels of your application you're calling Handle::default a lot. All of a sudden, though, you want to refactor your app to multiple event loops (for whatever reason) and the refactoring is then quite difficult! In other words, passing handles everywhere in libraries didn't help this "application use case".

Interesting thoughts! I've personally wavered on this design quite a bit, but I think it's relatively certain that we're going to want some form of a global event loop. It's just so darn painful in a lot of applications to pass handles everywhere, and a global event loop would solve that ergonomic pain. This does indeed mean that using functions like Handle::default are buying into future pain if you want to not use the default event loop, but that's a price that can be explicitly bought into!

Lots of other APIs have a default of the global event loop, so even if you do have a second event loop, it's easy to accidentally put things on the wrong event loop.

Another very good point! Our hope is that we'd have strong conventions around APIs you provide, for example tokio-core provides "convenience" APIs in this proposal which don't take handles, and then fully expressive APIs which take all arguments (including handles). It's true though that not all third party libraries may follow this same pattern.

It's worth pointing out, though, that you don't always have control over the third party library use case. Even if it did get handles passed in everywhere, you may want some of the third party library to happen on one event loop and some of it to happen on the other, but it may not provide that level of configuration through its API.

This in general is where we started to conclude that multiple event loops are likely to be a relatively niche use case, but if you've got some ideas we'd love to hear them!

Testing is harder; running two tests in parallel is outright impossible.

This I think may be a python-ism rather than a Rust-ism. The global event loop here can't even have foreign code run on it (you can't spawn tasks on it), so in that sense it's totally plausible for an application to have tons of test threads all sharing the same event loop and they can all be executing concurrently/in parallel.

Did you have some specific cases you were worried about, though?

Event loop per thread

One thing I like about this proposal is that it doesn't rule out any existing application architectures. In that sense it's always possible to have an event loop per thread (although @aturon has a good point that diving into the rationale here for this in the first place would be good), so I think it's important for me, at least, to acknowledge that this is mostly a question of ergonomics.

Ergonomically I think that this definitely ties into your previous points about third party libraries and idioms (who's passing handles and who takes handles). This is where @carllerche's "change the default on an executor level" would also come in handy as each thread could be an executor and change its default event loop.

Testing. Lack of isolation between tests is horrid to debug.

It's true that there's not complete 100% isolation between tests if there's a shared event loop, but because we're not running arbitrary code, the only vector for bugs (I think) is bugs in the tokio crate itself, which ideally are few and far between! In that sense I don't think that the global reactor makes tests any harder to debug than they are today, but if you've got some specifics in mind that'd be helpful!

Calling asynchronous code from synchronous code: in this case it's incredibly useful to be able to create a Core, run the Future, and throw it away.

To add to what @aturon mentioned about current_thread, this is still something we very much want to support! With current_thread it's actually even cheaper than creating a core and throwing it away, because resources like timers and I/O objects are longer lived and don't have to be reallocated each time.

@ishitatsuyuki

I'm against globals and particularly the "spawn" pattern. While this pattern seems to be a success in other languages, I consider Rust different, as we have powerful combinators and an ownership system. I'm not fond of this RFC for the same reason.

The ideal pattern I propose tries to avoid spawning entirely. All futures should be organized in a tree structure, by chaining asynchronous function responses, and finally running everything combined with Core::run. I think this leads to better modularization, and in particular cancellation is easier. This doesn't remove the need for spawn; it's still required under certain circumstances (in streams, or in a Drop implementation; see the TcpListener code for a case).

Feel free to correct me if I'm wrong.
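
For readers less familiar with that style, a minimal sketch of the pattern being described, using the tokio-core 0.1-era API (address and payload are illustrative): all the work is chained into one future tree and driven by a single Core::run, and dropping that tree cancels everything in it.

extern crate futures;
extern crate tokio_core;
extern crate tokio_io;

use futures::Future;
use tokio_core::net::TcpStream;
use tokio_core::reactor::Core;
use tokio_io::io::{read_to_end, write_all};

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();
    let addr = "127.0.0.1:8080".parse().unwrap();

    // One future "tree" built from combinators: connect, write, then read.
    let work = TcpStream::connect(&addr, &handle)
        .and_then(|stream| write_all(stream, b"ping"))
        .and_then(|(stream, _)| read_to_end(stream, Vec::new()))
        .map(|(_, response)| response);

    // Everything is owned by `work` and driven right here; no spawn involved.
    let response = core.run(work).unwrap();
    println!("got {} bytes", response.len());
}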

The reactor module provides just a few types for working with reactors (aka
event loops):

- `Reactor`, an owned reactor. (Used to be `Core`)

If we're changing the name anyway, could we call this EventLoop (and the module event_loop)? Reactor is really jargon-y and doesn't describe what the object does, witness every time it is mentioned in docs (or even this RFC) having "(aka event loops)" or something with it.

Member


One tiny downside to EventLoop is that the desirable variable name is a keyword:

let loop_ = EventLoop::new();


let event_loop = ...; :-)

Member


The good news is that it shouldn't be in the learning on-ramp. Even touching Reactor will be for more advanced users.

Also, Reactor is the parlance and has lots of precedent in other environments. As such, those who should be looking for that type probably are already familiar with the naming.


It's a shame the awesome name reactor::Core is becoming reactor::Reactor.

@cramertj

cramertj commented Sep 20, 2017

@ishitatsuyuki While that model (single state machine which owns all child futures) makes it easy to handle ownership and cancellation, it can be much less performant in many cases because the entire state machine must be polled every time an event is delivered from mio.

It'd be really nice to have some performance guidance and heuristics around the thresholds at which it's best to spawn separate futures vs. maintaining a single state machine, in addition to some information about the performance tradeoffs of running multiple event loop threads.

@eminence

It should provide an order of magnitude more examples, with a mix of "cookbook"-style snippets and larger case studies.

One class of examples that I would love to see (as I struggle with this every time I work on a tokio-related project) is an expansion of the basic echo server into something with multiple streams/futures. For example, an echo server that will listen on a UDP socket as well as a TCP socket, and have a timer future thrown in as well. How might you combine all 3 things into the same handler or event loop?
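
Not an official answer, but one hedged sketch of that combination using the tokio-core 0.1 and tokio-timer APIs (addresses, buffer size, and the 30-second period are made up): each source becomes a stream or future, and the three are joined into one future driven by a single core.

extern crate futures;
extern crate tokio_core;
extern crate tokio_timer;

use std::io;
use std::time::Duration;
use futures::{stream, Future, Stream};
use tokio_core::net::{TcpListener, UdpSocket};
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let tcp_addr = "127.0.0.1:8080".parse().unwrap();
    let udp_addr = "127.0.0.1:8081".parse().unwrap();

    // TCP: a stream of incoming connections on the shared event loop.
    let listener = TcpListener::bind(&tcp_addr, &handle).unwrap();
    let tcp = listener.incoming().for_each(|(_conn, peer)| {
        println!("tcp connection from {}", peer);
        Ok(())
    });

    // UDP: recv_dgram consumes and returns the socket, so unfold turns the
    // receive loop into a stream of (bytes, peer) pairs.
    let socket = UdpSocket::bind(&udp_addr, &handle).unwrap();
    let udp = stream::unfold((socket, vec![0u8; 1500]), |(socket, buf)| {
        Some(socket.recv_dgram(buf)
            .map(|(socket, buf, n, peer)| ((n, peer), (socket, buf))))
    }).for_each(|(n, peer)| {
        println!("udp: {} bytes from {}", n, peer);
        Ok(())
    });

    // Timer: tick every 30 seconds (tokio-timer drives its wheel on its own thread).
    let ticks = tokio_timer::Timer::default()
        .interval(Duration::from_secs(30))
        .map_err(|_| io::Error::new(io::ErrorKind::Other, "timer error"))
        .for_each(|_| {
            println!("tick");
            Ok(())
        });

    // Join all three handlers into one future and run it to completion.
    core.run(tcp.join3(udp, ticks)).unwrap();
}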

@aturon
Contributor Author

aturon commented Sep 20, 2017

Thanks @eminence!

And please, everyone else reading this thread: if you have examples you'd like to see, toss 'em out!

@yazaddaruvala

yazaddaruvala commented Sep 20, 2017

First question: Why does this all have to happen at the same time?

  • Do we really need to change the crate name, to improve ergonomics?
    • Why not leave the re-name to tokio as a final event after the new API has stabilized within tokio-core? A lot might change before then.
  • Do we really need to impose globals if we move executors to the futures crate?
    • Given we want to add non-ergonomic methods which take an explicit "Core", why not just add the ergonomic helper functions after the rest of this new API is stable?

I'm not saying I agree or disagree with these decisions. It just all seems very fast and honestly forced, and I'm not sure what the rush is. Each of those could be their own follow-up RFCs with long discussions. We have waited this long for Rust's async io story, why not take it one step at a time and ensure it is appropriately thought through?

Second question: Global event loops and global thread pools:

Rayon has a concept of rayon-core, which from what I understand is to solve issues with version conflicts. I didn't see this issue called out in this RFC, so I'm curious. Will the new tokio crate never change its version from 1.0? Or is the issue Rayon described solved some other way by Tokio? Or did I misunderstand something along the way?

Additionally, Futures+Executors seems to have at least a superficial similarity with Rayon (which is Iterators+Executors); if nothing else, does it make sense to share the rayon-core thread pool? Or work with Rayon to ensure some other common solution?

I have more questions, but I'll start with those.

@Screwtapello

Screwtapello commented Sep 20, 2017

Testing. Lack of isolation between tests is horrid to debug.

This I think may be a python-ism rather than a Rust-ism. The global event loop here can't even have foreign code run on it (you can't spawn tasks on it), so in that sense it's totally plausible for an application to have tons of test threads all sharing the same event loop and they can all be executing concurrently/in parallel.

I'm not too familiar with Tokio, but I have done a bunch of work with Twisted. Twisted extends Python's unit-testing library, adding extra sanity checks like "when the test function returns, the reactor should have no registered file-descriptors or queued timers" (i.e. does the system-under-test clean up after itself). That's a very useful post-condition to check, and very difficult if many tests are running in parallel on the same reactor.

@carllerche
Member

@Screwtapello None of that would be needed in the proposed system. All handles (TcpStream, TcpListener, etc..) would be owned in the test, so when they go out of scope, they will be dropped (which means removed from the global reactor).

@carllerche
Member

@yazaddaruvala

Do we really need to change the crate name, to improve ergonomics?

Changing the crate name means these changes do not require an 0.2 release. Releasing a tokio-core 0.2 is quite hard due to the implications of breaking downstream dependencies.

Do we really need to impose globals if we move executors to the futures crate?

As explained in the RFC, it significantly reduces the number of concepts needed to get started w/ Tokio, as well as improving ergonomics for the most common cases. It also allows the decoupling of the reactor (I/O driver), executor, and timers, because most people won't have to set these up. Only those who care will have to learn how all those various components come together to make a runtime.

Given we want to add non-ergonomic methods which take an explicit "Core", why not just add the ergonomic helper functions after the rest of this new API is stable?

does it make sense to share the rayon-core thread pool

It does not, which is why I wrote futures-pool which is similar in spirit to rayon but geared towards futures.

The reasoning:

Rayon is designed to handle parallelizing single computations by breaking them into smaller chunks. The scheduling for each individual chunk doesn't matter as long as the root computation completes in a timely fashion. In other words, Rayon does not provide any guarantees of fairness with regards to how each task gets scheduled.

On the other hand, futures-pool is a general purpose scheduler and attempts to schedule each task fairly. This is the ideal behavior when scheduling a set of unrelated tasks.

@diwic

diwic commented Sep 20, 2017

What is the story for running tokio on top of non-mio main loops, e.g. the glib main loop?

@cramertj

cramertj commented Sep 22, 2017

@yazaddaruvala Did you just propose implicit parameters? 😄

@Ralith

Ralith commented Sep 22, 2017

@alexcrichton

Perhaps the current_thread module could just be called thread?

This is appealingly consistent with use of std::thread, e.g. std::thread::sleep.

Remove panicking from Drop for TaskRunner

I think this is mandatory to avoid the double-panic case highlighted by @leodasvacas. Ensuring that non-daemon tasks aren't running upon clean shutdown is important for correctness, but using block_on_all already accomplishes that, so this makes perfect sense.

Timers and the global event loop

Strongly in favor of this. IMO the only reasonable alternative would be to not provide Timeout at all, but that would be inconvenient.

@rustonaut

@dathinab I don't see why TaskRunner or Spawn would need to be Send, they would be created on a thread of the pool. I guess the futures themselves only need to be Send if the thread pool does work stealing.

It's a bit of a point-of-view thing, but if you schedule work on a thread which just happens to be owned
by a thread pool, I wouldn't say you scheduled it on the thread pool; it's more that you extended
already-scheduled work.
This is quite a bit of nitpicking, but if you document it as scheduling tasks on a thread pool when it is run in one, it's just a matter of time until some users get very irritated about why their program is so slow and seems to only use one thread...

Edit: This was a misconception on my part, the RFC says it panics only if not already panicking.

you were faster than me 😉

It's still pretty bad if you pass a TaskRunner to a general executor, cancel it, and it panics.

A typical example: a library exposes a TaskRunner as a future (opaque with impl Trait), you select over it with a timeout and forget to test the actual timeout case => panic in production (in more than one sense).

@yazaddaruvala

Thanks @cramertj. Clearly I need to improve my Google-fu.

However, it is good to know this idea isn't as left field as I originally thought.

@carllerche
Member

carllerche commented Sep 25, 2017

Thoughts on timers

This comment represents an overview of my thoughts on timers. It will provide a bunch of context.

There are generally two categories of timers. Coarse and high resolution timers.

High resolution timers tend to be set for sub second delays and require nanosecond resolution.

In the context of network programming, coarse timers are appropriate. In this case, 1ms tends to be the finest resolution needed, but usually even 100ms resolution is sufficient. On top of that, even triggering the timeout with error margins as high as 30% is acceptable. This is because, in network programming, timers are usually used for:

  • Catching events that don’t happen (data fails to arrive on the socket).
  • Freeing idle resources (data in a LRU cache is no longer needed).
  • Sending out data to a peer on a period (every thirty seconds, send a ping).

The network is unreliable, as such it is usually OK for timers to be coarse.

Assumptions

When implementing a timer, the following assumptions are safe to make. These assumptions have guided the design of timers across many projects (Linux kernel, Netty, …)

  • Timer management must be as lightweight as possible.
  • The design should scale well as the number of active timers increases.
  • Most timers expire within a few seconds or minutes at most.
  • Long delays are rare.
  • Many, if not most, timeouts will be canceled before they are fired.

Algorithms

There are two common categories of algorithms for implementing timers. A heap and some variation of a hashed timer described in Hashed and Hierarchical Timing Wheels: Data Structures for the Efficient Implementation of a Timer Facility.

Heap timer

This uses a heap data structure to store all timeouts, sorted by expiration time. Heap-based timers have the following properties:

  • Trigger timeout: O(1)
  • Set timeout: O(log n)
  • Cancel timeout: O(log n).

Because of the assumptions stated above, heap based timers are rarely appropriate for use in networking related scenarios.

Hashed timer

While hashed timers are fairly simple conceptually, a full description is out of scope for this comment. The paper linked above as well as the overview by Adrian Colyer are good sources. There are a number of variations of the general idea, all with different trade-offs, but at a high level they are pretty similar.

Hash-based timers have the following properties:

  • Trigger timeout: O(1)
  • Set timeout: O(1)
  • Cancel timeout: O(1)

The various implementation permutations provide differing behavior in terms of the coarseness of the timer, the maximum duration of a timeout, trade offs between CPU & memory, etc… For example, a hashed wheel timer could be configured to have a resolution of 100ms and only support setting timeouts that are less than 5 minutes into the future.

Another option could be a hierarchical timer that supports a resolution of 1ms and supports setting timeouts of arbitrary duration, but requires some bookkeeping CPU work to happen every 3.2 seconds.

In fact, one characteristic of hashed wheel timers is that, when they are tuned for general purpose cases (i.e. supporting a resolution of 1ms and arbitrarily large duration timeouts), they tend to require book keeping every so often that could block the thread (this will be important later). However, even with these cons, the various hashed timers are much better suited for network programming cases based on the assumptions listed above.

  • Delays in firing timeouts are OK, as timeouts don’t need to be precise.
  • This bookkeeping work, while it can create pauses, is much lower than the amount of work needed to maintain a heap based timer, especially as the number of outstanding timeouts grows (potentially in the millions) and most timeouts get canceled.
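
To make the wheel idea concrete, here is a toy single-level sketch (illustrative only; not Tokio's, Netty's, or the kernel's implementation, and real wheels add hierarchy or overflow lists for long delays):

use std::collections::VecDeque;
use std::mem;

// Toy single-level hashed timing wheel: `num_slots` buckets, each `tick_ms` wide.
// Insert, cancel (not shown), and trigger are all O(1) amortized.
struct TimerWheel<T> {
    slots: Vec<VecDeque<T>>,
    current: usize,
    tick_ms: u64,
}

impl<T> TimerWheel<T> {
    fn new(num_slots: usize, tick_ms: u64) -> Self {
        TimerWheel {
            slots: (0..num_slots).map(|_| VecDeque::new()).collect(),
            current: 0,
            tick_ms: tick_ms,
        }
    }

    // Schedule `item` to fire roughly `delay_ms` from now (coarse: rounded up
    // to a whole tick). A single-level wheel only covers one revolution.
    fn insert(&mut self, item: T, delay_ms: u64) {
        let ticks = ((delay_ms + self.tick_ms - 1) / self.tick_ms).max(1) as usize;
        assert!(ticks < self.slots.len(), "delay exceeds the wheel's range");
        let slot = (self.current + ticks) % self.slots.len();
        self.slots[slot].push_back(item);
    }

    // Advance one tick and return everything that expired in that slot.
    fn tick(&mut self) -> VecDeque<T> {
        self.current = (self.current + 1) % self.slots.len();
        mem::replace(&mut self.slots[self.current], VecDeque::new())
    }
}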

I/O reactor

This RFC proposes a default, global I/O reactor. Roughly speaking, its job is to spin in a loop, sleeping on epoll_wait, taking all epoll notifications and notifying the tasks that are associated with the I/O resources. The RFC details that, by default, a thread will be spawned up to run this logic, and nothing else. This means that, by default, there will be cross thread communication required for the I/O runtime thread to notify the futures being executed on another thread. As @aturon explained, while there is overhead involved, it isn’t bad. In general, the setup of a work-stealing thread pool used to drive app logic and an I/O thread to drive async I/O events has a lot of precedent and is an ideal default.

Now enters the question of timers. The RFC details that there is also a default, global timer. A default, global timer will also require a runtime thread in order to sleep & fire. The immediate thought would be that, since there is already a runtime thread required for the I/O reactor, this thread can be reused to drive the timer.

This would not be ideal. If cross thread communication is required to drive a default, global timer, a dedicated thread should be used. This is because:

  • The ideal timer algorithm is hash-based.
  • A hash-based timer tuned for general-purpose use requires CPU pauses to perform bookkeeping work.

While pauses that delay triggering timers are not critical (see previous section), if they happen on the same thread as the I/O reactor, they could cause delays in dispatching I/O events, which is not acceptable.

Running the timer on a dedicated thread solves these problems.

Executors

Now, let's take a moment to talk about strategies for executing futures. Generally, these are the same considerations taken when scheduling actors or green threads. As such, lessons from other environments apply here.

Locality, locality, locality!

When executing small tasks across a set of threads, you want to move data around threads as little as possible. This improves cache hits, locality, reduces synchronization, etc…

This guiding principle highly influenced the design of futures-rs’s futures and tasks. This is why futures own their own data and futures themselves are owned by executors, which are responsible for scheduling those futures. Hopefully, those executors are as smart as possible and avoid moving the futures across threads.

In fact, an efficient scheduling strategy is a pool of threads that keeps all futures local as much as possible. This is the work-stealing approach implemented by futures-pool.

Now, given that locality is key, the way timers would fit into futures-pool would be for each thread in the pool to have its own timer. This way, when a future sets a timeout, it would require no cross thread communication, and when the timeout fires, it would notify a future that is (ideally) on its thread.

So, given that I mentioned that the timer will have significant pauses, one could ask: wouldn't it be bad to run it on a future scheduler? No, for the following reasons:

  • The timer bookkeeping work would be spread out between scheduling futures (this doesn’t work in the case of the I/O reactor because epoll_wait is a sys call, which is quite heavy and we don’t have any visibility into how much work is pending in the epoll queue).
  • Because the timer is almost entirely for futures that are on the same thread, if the thread is scheduling a future, there are by definition no timeouts that can fire, because any timeout that would fire would notify a future on the same thread that can’t run until the current future is done executing (and the reverse applies too).

Lastly, you might ask: if it makes sense to have a timer per thread, shouldn’t we also have an I/O reactor per thread? In short, the answer is: ideally we would (see Seastar), but doing so requires a user-land TCP stack. OS-level async I/O primitives don’t provide the necessary flexibility to run an I/O reactor per thread efficiently (the exact reasons are out of scope for this comment).

Conclusion

The point of this lengthy comment is to illustrate why, in most cases, it is better to keep the I/O reactor and the timer on separate threads. Specifically, the only time they would be on the same thread is when the entire system is single threaded.

Thus, it does not make sense to bake in a timer to the I/O reactor or to pass an I/O reactor handle when setting a timeout. This links the timers to the I/O reactor, which, as this comment argues, is the opposite of what is ideal. I am opposed to any proposal that makes the Tokio &Handle a timer handle.

@bluetech

@carllerche I am not an expert in this by a long shot, but have you considered letting the kernel handle the timers, e.g. timerfd in Linux? Too slow? Too much overhead? Not portable?

@carllerche
Member

@bluetech

timerfd on Linux is backed by a great general purpose timer implementation. There are also advantages to having the OS handle timers. However, the drawbacks are:

  • It isn't portable.
  • Using it requires syscalls.
  • It is a general purpose timer. You can usually do much better with a timer tuned to the use case at hand.

@tanriol

tanriol commented Sep 25, 2017

@carllerche What timer implementations will be available in Tokio? Will it still be possible to have a single thread running a core, executor and high-res timers used in the futures scheduled on it?

@carllerche
Member

@tanriol I would like the timer implementation to be completely decoupled from the reactor. This would let you swap in whatever impl you want. And yes, the goal would be for it to still be possible to run everything (I/O reactor, timer, and executor) on a single thread.

However, you cannot pair a high-res timer w/ tokio (and probably futures in general...). Assuming you mean a high-res timer with sub-millisecond granularity, this just isn't possible due to OS APIs being roughly ms granularity and up.

@aturon
Contributor Author

aturon commented Sep 25, 2017

Thanks @carllerche for the writeup about timers!

I've been talking some with @alexcrichton and @carllerche, and want to propose we make some revisions to the RFC based on the feedback so far:

  • Tighten up naming and Drop behavior around the current_thread module, as proposed by @alexcrichton

  • Refactor the turn_until API, again as proposed by @alexcrichton. This should be considered a lowest-level primitive for working with reactors directly.

  • Potentially extract timer functionality out of this crate altogether, but at any rate have it use a dedicated thread by default (much like tokio-timer).

It's also clear that we need a much more crisp story about how to customize the defaults the library would provide. In particular:

Hard constraints

  • Must be possible to use arbitrary libraries from the ecosystem, with a guarantee that no threads are spawned
    • Implies: must be able to “repoint” the global reactor + timer to a custom one you’re managing yourself
  • Must be able to customize the defaults at least at the executor level
  • Must be easy to tell, for certain, what reactor/timer a given piece of code is using
    • e.g., it can use the global default (whatever I set that to), or the reactor that an already-bound I/O object I pass in is using

Note: these hard constraints mean, in particular, that the library is fully "zero cost" in the sense of "pay only for what you use": if you want to exert full control over reactor and timer management, you can do so, and no threads or reactors will be created behind your back. But for the common case, you can use the defaults and have a good experience.

Soft constraints

  • Align as much as possible with executors
    • In particular, try to make executors be, in general, an “execution context” for a task, which says how to find various expected “global” resources
  • Avoid need for Handle-passing APIs if possible
    • This is in tension with the previous goal, because of the need to create an I/O object in one task, but use it in another
    • May be possible to work around this by “lazily binding” I/O objects to reactors

@alexcrichton is going to look into some concrete APIs for meeting the above constraints. It seems best for this RFC to include the basic customization story, to ensure that we have the bases fully covered. After that, I'll plan to update the RFC text itself.

@twmb

twmb commented Sep 26, 2017

I've spent way too long reading this entire thread top to bottom twice now
to be able to ask questions and reply appropriately.

In the end, I'm generally for these changes, but before getting into that,
I'd like to ask questions and make comments on everything else. I'm sure that
some of these questions were answered and I'm just being dense, so please bear
with me if I ask something that has been answered, or ask for further
clarification. Thanks!

Re: rendered RFC

On the other hand, if the woken task does non-trivial work (even just to
determine what I/O to do), it is effectively blocking the further processing
and dispatch of I/O events, which can limit the potential for parallelism.

This is only true if the reactor is used for multiple threads, right? If
there is one core per thread, it makes sense to block further io dispatch
because being blocked means that thread is working.

  • A global thread pool for executing compute-intensive or blocking tasks;
    tasks must be Send.
  • Optionally, a dedicated thread for running thread-local (non-Send) tasks in
    a cooperative fashion. (Requires that the tasks do not perform too much
    work in a given step).

How would this look in libraries that need to spawn or return tasks? I think
this would force most library authors to make their tasks Send, right?

we can incorporate recent changes to mio

Out of curiosity, what recent changes is this referring to? I looked through
the commit subjects for a few pages and none jumped out.

we need control over the way the event loop is run in order to support
timers

Why is this the case?

  • This is later discussed in much more detail by @alexcrichton and @carllerche,
    but I think a one or two sentence summary on this would be nice. @carllerche's
    Conclusion is pretty good on this matter, but I know that there is a lot of
    confusion about timers in this thread.

spawn_daemon

For my own curiosity, when is this useful?

  • This is later answered with @alexcrichton's comment

this API is carefully crafted to help ensure that spawned tasks are actually
run to completion

How does this differ from before? I saw it a few times in this comment thread,
but what is an exact case where a spawned task is not run to completion?

executor's enter()

Does this API need to exist? Can it not just be a hidden thread local variable?

Re: first comment

Moreover, when running tasks thread-locally ... provide more fool-proof APIs
that help avoid lost wakeups

When and why do lost wakeups happen today?

If Reactor provides the turn API with a timeout, then there is no need to
couple a timer ...

I'm a bit confused here, so I'm going to write what I think it means, ask
for confirmation, and then ask a question or two:

The global event loop should probably come with a timer because turn takes
a Duration, and that needs a timer to run, right? The next line indicates
that a timer isn't paired with a reactor - but in this scenario, wouldn't each
reactor be paired specifically with its one timer?

  • In terms of the "this is talked about later" comments above - again, this is
    talked about. One part not talked about is whether multiple reactors would
    all share the same timer thread or not.
Re: why multiple event loops

where we anticipate actually using multiple Tokio event loops

I agree with @alex's event loop per thread.
I thought @carllerche already
implied something like this a few comments back ("threads that are fully isolated").

Re: event loops, again

This does indeed mean that using functions like Handle::default are buying
into future pain if you want to not use the default event loop, but that's a
price that can be explicitly bought into!

What would be unfortunate here is that all libraries that use the global event
loop would be eliminated immediately.

The compromise of overriding the default event loop seems to make this concern
go away.

Re: different profile workloads

Ultimately it gets saturated with different profile workloads and cause
starvation, over utilization, thrashing, etc

This is also something to worry about when doing a lot of file I/O - we want to
do file I/O on separate threads not only because they sleep but to better let
the OS predict that these threads don't need to be scheduled as much.

Summary Comment about how most concerns are resolved

This comment agrees that a default that can be overridden solves a lot of issues
that people have with it - really, overridable defaults seem to solve most
issues in the thread. However, it does bring up a problematic issue of tokio in
dynamically linked libraries and what threads those are using.

Timers cannot take a &Handle

I'm still confused on why - the reason here is "the goal was to decouple timers
from the I/O reactor", but I'm still not fully sure why that means timers,
which are decoupled from the I/O reactor, can't take a reactor anyway?

In terms of not sending data across threads

Because the sockets stay on the thread that uses them. Thus, the reads /
writes all happen on the thread that uses them.

But these sockets need to send events to register with epoll to the reactor,
which is the cross thread communication.

But again, this shouldn't be a problem because the default global reactor can
be overridden on a per-executor basis, meaning the Seastar architecture can
still be imitated.

Ergonomic need

the ergonomic goal here is to avoid the need to pass around information
that's just going to be the same everywhere

For this point, I'm not so sure. Go has, over the past few releases, been
slowly introducing context through anything
that has a connection. I know Go is derided a lot in Rust circles, but context
itself has proven not to be awkward.

Re: name shortening

thread as opposed to current_thread is shorter, but for me, mental context
when reading the code is lost. Whose thread? Which thread? I disagree with
@Ralith's later comment
about how it is appealingly consistent. std::thread is for thread utilities,
std::thread::current is the function that gets the current thread.

Same with Tasks - what does that mean? TaskRunner is obvious. Spawner is
obvious. These are words that imply what they do.

Also, later, NotifyHandle - this has always read to me as an action (notify
this handle!), whereas it's actually an object, a handle that can notify. Small
distinctions like this make the API incredibly confusing to read through for
me.

General comments

There are multiple places in this thread that mention the main problem is
documentation. I definitely agree, but it's not just that examples are missing.
It's hard to know exactly what happens as futures chain into each other. Are
they run sequentially until one blocks, or is there a point at which other
futures may be run? What happens when a future returns Async::NotReady but
doesn't install something that will wake it back up? It never gets called
again - how do I change that for some user-space thing that is difficult to
install notifications for?
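
To illustrate that Async::NotReady question, here is a minimal sketch against
futures 0.1 (the Flaky type is made up; self-notifying like this just
busy-polls, but it shows the mechanism that has to fire for a task to be
polled again):

```rust
extern crate futures;

use futures::{task, Async, Future, Poll};

struct Flaky {
    ready: bool,
}

impl Future for Flaky {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<(), ()> {
        if self.ready {
            return Ok(Async::Ready(()));
        }
        // Without the next line, nothing ever wakes this task again after
        // NotReady is returned, so it simply hangs forever. Calling notify()
        // makes the executor poll us again right away (a busy loop -- for
        // illustration only; real futures register with a reactor or timer).
        task::current().notify();
        Ok(Async::NotReady)
    }
}
```

In real code, the thing you "install" is whatever will eventually call that
notify handle for you - an I/O reactor, a timer, or another thread.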

As a last critique, I want to bring up the API naming again. For Reactor, why
turn or turn_until? Every time I see these names without the corresponding
documentation, I am confused. If the answer is the same as this comment in
that it's the parlance for those in the know who have the history, I'm not a
big fan of keeping that parlance. It seems a bit odd to choose a more
obscure name even if it is more technically correct, and, other than the
Wikipedia page on "Reactor pattern", I've never seen it before.

Overall, I'm conflicted about this change. Global state is usually a code
smell. This new proposal introduces a runtime into a language whose core focus
is minimal overhead and speed.

However, the compromises that @aturon listed as hard constraints
just above should truly mean that users can override the global state
defaults with what they want. Since everything in the tokio ecosystem would
need one or all of a handle, a reactor, or a timer, it makes sense to have
this global state. Since that global state can be overridden, it seems
non-controversial.

About the only thing that does not seem addressed is the comment about
loading DLLs
that may be using threads.

Did I capture everything? Also, sorry for any odd formatting; I took notes and wrote this in an editor.

@carllerche
Member

@twmb

This is only true if the reactor is used for multiple threads, right? If there is one core per thread, it makes sense to block further io dispatch because being blocked means that thread is working.

Yes and no. The question is whether every person doing anything async should always have to think about keeping their tasks short-running. It's not always obvious when a bit of async code consumes too much CPU. Even if you are running a reactor per thread, while you will keep the CPUs busy, you will starve other tasks that are waiting for time.

Basically, running a reactor & executor on the same thread can be more efficient but is harder to get right.

How would this look in libraries that need to spawn or return tasks? I think this would force most library authors to make their tasks Send, right?

This puts emphasis on Send, but it isn't a requirement. Lib authors can request an executor that is !Send.

Out of curiosity, what recent changes is this referring to? I looked through the commit subject's for a few pages and none jumped out.

It is referring to work that has happened throughout most of 2017.

re: executor enter

Does this API need to exist? Can it not just be a hidden thread local variable?

The API is intended to be used only by authors of executors, not end users.
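
Roughly speaking it is a thread-local flag, but exposed as an RAII guard so
that executor authors can detect accidental nesting. A hypothetical sketch
(the ENTERED / Enter names are made up for illustration, not the RFC's actual
API):

```rust
use std::cell::Cell;

thread_local! {
    // Tracks whether an executor is already running on this thread.
    static ENTERED: Cell<bool> = Cell::new(false);
}

/// RAII guard returned by `enter`; dropping it re-opens the thread.
pub struct Enter;

pub fn enter() -> Result<Enter, &'static str> {
    ENTERED.with(|flag| {
        if flag.get() {
            Err("an executor is already running on this thread")
        } else {
            flag.set(true);
            Ok(Enter)
        }
    })
}

impl Drop for Enter {
    fn drop(&mut self) {
        ENTERED.with(|flag| flag.set(false));
    }
}
```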

When and why do lost wakeups happen today?

It's a common issue that comes up in Gitter / IRC. Users spawn tasks onto an executor but never start the executor. A local thread executor can't be implicitly started (unlike a thread pool), so the spawned tasks never run, which is confusing for new users to debug.
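
To make that concrete, a minimal sketch using the existing tokio-core 0.1 API
(Core, Handle::spawn, and future::lazy are real; the scenario is illustrative):

```rust
extern crate futures;
extern crate tokio_core;

use futures::future;
use tokio_core::reactor::Core;

fn main() {
    let core = Core::new().unwrap();
    let handle = core.handle();

    // The task is queued on the single-threaded executor...
    handle.spawn(future::lazy(|| {
        println!("this never prints");
        Ok::<(), ()>(())
    }));

    // ...but the executor is never started: there is no `core.run(...)` or
    // `core.turn(...)`, so the spawned task silently never runs.
}
```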

@carllerche
Member

@twmb

The global event loop should probably come with a timer because turn takes
a Duration, and that needs a timer to run, right?

The duration only indicates the max duration to block. This is passed through to epoll_wait (and the equivalent on other OSes).
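
As a sketch of what that looks like with today's tokio-core Core::turn (the
50ms value is arbitrary; no timer is involved, the OS poll call takes the
timeout directly):

```rust
extern crate tokio_core;

use std::time::Duration;
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();

    // Dispatch any pending I/O events, blocking for at most 50ms.
    core.turn(Some(Duration::from_millis(50)));
}
```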

But these sockets need to send events to register with epoll to the reactor,
which is the cross thread communication.

No, epoll ops happen on the same thread. epoll is Sync.

Re: Go context, that is a separate feature and is similar to Finagle's async task-local variables (or whatever they call them).

@aturon aturon closed this Sep 27, 2017
@aturon
Contributor Author

aturon commented Sep 27, 2017

I've made several updates to the RFC based on the feedback here, but due to a Github glitch, I need to put this in a new PR. In any case, here are the key changes:

  • The RFC now includes a fleshed-out API for customizing the default reactor, which can be used to prevent any automatic thread-spawning. This gives you complete control if you want it.

  • The timer APIs are moved out to a futures-timer crate, and by default spawn a dedicated timer thread (as per @carllerche's argument). We eventually plan to make this behavior customizable as well, but want to punt on that bit of design work for now.

  • The current_thread APIs are significantly tightened up, and in particular the panic-on-drop behavior has been removed. The main downside to this change is that, if you want to couple together a reactor and a single-threaded executor, you'll need to do this manually using FuturesUnordered. However, external crates (like tokio-core!) can provide this coupling for you.
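
As a rough sketch of that manual coupling, written against today's tokio-core
and futures 0.1 rather than the new API (the future::ok tasks are trivial
placeholders):

```rust
extern crate futures;
extern crate tokio_core;

use futures::future;
use futures::stream::{FuturesUnordered, Stream};
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();

    // Collect the single-threaded (possibly !Send) tasks into one set...
    let mut tasks = FuturesUnordered::new();
    tasks.push(future::ok::<(), ()>(()));
    tasks.push(future::ok::<(), ()>(()));

    // ...and drive the whole set on the reactor thread until it is empty.
    core.run(tasks.for_each(|()| Ok(()))).unwrap();
}
```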

@realcr

realcr commented Nov 26, 2017

Thank you for the great work on Tokio. This is my favourite Rust crate.

I agree with @alex. Please don't add a global event loop, or anything global.
I did some async programming, first with Twisted and then with asyncio (both Python). I was also a contributor to Twisted. The global event loop caused me lots of trouble.

(1) Almost every API I used had a hidden possibility to add an argument of loop=.... Each such API was a chance to shoot yourself in the foot. If you forgot to specify your own loop, the default loop was used and nobody told you about it. Then the debugging hell began. I remember one time it took me a few days to find the API that defaulted to the global loop.

As a concrete example, take a look at this pythonic API (asyncio):

asyncio.ensure_future(coro_or_future, *, loop=None)

From here.

This is the equivalent of spawn in Rust's Tokio. If you accidentally forget to specify loop=my_loop in your test case, you should expect days of debugging, searching for the problem.

(2) There are worse cases: libraries that don't let you pass in your own loop at all. They just assume that you will want to use the default one. It can become very difficult to test your code against those kinds of libraries.

One experience I had was testing asyncio Python code that waits. I created a mock event loop that simulated the passage of time (called asyncio time travel), because I couldn't afford having my tests wait a few minutes for a timeout. The author of a library I used decided to use the global event loop, and I couldn't replace it with my mock time-travel loop. In the end I had to create a test environment that actually waits many minutes in order to run the tests.

(3)

there are many other environments (Go, erlang, node / libuv, ...) in which a global event loop has been successful.

Rust is not the same. Rust is a systems programming language. I like Rust because it is very explicit. There is no hidden stuff. When I write code in Rust I don't want implicit global things to happen. If I want an event loop, I will create one.

I really liked the current Rust Tokio way of creating a new loop and then putting your futures into it, and I prefer it every day over magical global-loop constructions.
I also think that explicitly creating an event loop follows the Rust way, rather than having a global construct created implicitly for you.
It is true that people could write their libraries to allow using the default global event loop while also giving an alternative API for handing over your custom loop, but people tend to take the easier path.

For me the hard part about learning Tokio wasn't the event loop. The event loop code is usually just a few lines that you slap on at the end, after you've written all your futures.

The hard parts were finding out that mixing futures with references and lifetimes doesn't go very well, and understanding how to transform the types correctly so that I wouldn't get huge cryptic compiler messages about a mismatch in the error type between two futures.
