RFC: stabilize `std::task` and `std::future::Future` #2592
Conversation
aturon added some commits on Apr 24, 2018
aturon added the T-libs label on Nov 10, 2018
|
cc @rust-lang/lang -- I haven't tagged this as T-lang, since the Lang Team already approved the async/await RFC and this is "just" about library APIs. But of course y'all should feel free to weigh in. cc @Nemo157 @MajorBreakfast @tinaun @carllerche @seanmonstar @olix0r |
|
@cramertj can you comment on |
|
cc @rust-lang/libs, please take a look! |
|
@cramertj I only briefly mentioned Fuchsia in the RFC, but it might be helpful for you/your team to leave some commentary here about your experience with the various iterations of the futures APIs. |
Although this isn't strictly relevant to the technical merits of the proposed APIs, considering the sheer scope and history of what we're talking about adding to std it seems worth asking: Are there any blog posts discussing Fuchsia's experience in more detail? This is the only part of the historical context I was completely unaware of, and I couldn't find any place that talks about it. EDIT: I swear I started typing this before aturon's last comment |
|
Thanks for putting this together.

My experience with the proposed

If the proposed

Because of this, my plan for Tokio will be to stick on

Also, most of Tokio would require the

I understand the desire to drive progress forward. As I said, as far as I can tell, the proposed

Edit: I should clarify, Tokio will add support for |
|
@carllerche You and I have talked about this a bunch on other channels, so I'll be repeating myself, but I want to write a response here so that everyone else is on the same page as well.

There are indeed limitations with async/await today, due not so much to the feature itself as to the lack of impl-trait-in-traits (or existential types) working sufficiently well (as well as, ultimately, GATs). They limit the ability to move foundational libraries to use async/await internally, and that's part of the reason we're not ready to stabilize the syntax itself yet. However, to be clear, none of these limitations connect to the

The 0.1/0.3 compatibility system, which allows for fine-grained/incremental migration, ends up doing a lot to lower the stakes. For example, it's already fairly painless to write code for hyper using

I think everything else you raise is discussed in the RFC as well, so I don't have more to add there! |
|
What’s the rationale for having both task and future modules? Since future only includes Future, it seems that having two modules doesn’t pull its weight. Are we expecting to move a bunch of future stuff into std in the future? |
Redrield commented on Nov 11, 2018
|
@nrc I'm not sure but perhaps the |
ivandardi commented on Nov 11, 2018
|
If Future is a trait, then why isn't this example in the RFC written as ? |
rpjohnst commented on Nov 11, 2018
|
A small bikeshed: I'm not sure I totally understand all the layers of |
> onto a single operating system thread.
>
> To perform this cooperative scheduling we use a technique sometimes referred to
> as a "trampoline". When a task would otherwise need to block waiting for some
glaebhoerl (Contributor), Nov 11, 2018
Is this the same "trampoline" concept which is used in the context of avoiding stack overflows for recursive calls?
|
@ivandardi that's related to the async/await syntax rather than the library code provided here. The design as I understand it is that |
Yes, both modules are expected to grow substantially over time. The futures crate contains a similar module hierarchy with a much richer set of APIs. In addition, there will eventually be a |
jethrogb referenced this pull request on Nov 11, 2018: Tracking issue for async/await (RFC 2394) #50547 (Open)
> an API with greater flexibility for the cases where `Arc` is problematic.
>
> In general async values are not coupled to any particular executor, so we use trait
> objects to handle waking. These come in two forms: `Waker` for the general case, and
> Task execution always happens in the context of a `LocalWaker` that can be used to
> wake the task up locally, or converted into a `Waker` that can be sent to other threads.
>
> It's possible to construct a `Waker` using `From<Arc<dyn Wake>>`.
> /// - [`Poll::Ready(val)`] with the result `val` of this future if it
> ///   finished successfully.
> ///
> /// Once a future has finished, clients should not `poll` it again.
jethrogb (Contributor), Nov 11, 2018
No behavior is specified for when clients *do* do that. I think we should say something. For example, "implementors may panic".
Nemo157 (Contributor), Nov 11, 2018 (edited)
I think we could even go a bit stronger than that, “implementors should panic, but clients may not rely on this”. All async fn futures guarantee this, and I believe so do the current futures 0.3 adaptors.
I think it would also be good to mention that calling poll again must not cause memory unsafety. The current mention that it can do anything at all makes it seem like it is allowed to have undefined behaviour, but since this is not an unsafe fn the implementer cannot rely on the client’s behaviour for memory safety purposes.
> When a task returns `Poll::Ready`, the executor knows the task has completed and
> can be dropped.
>
> ### Waking up
jethrogb (Contributor), Nov 11, 2018 (edited)
It's unclear at first glance which of the code blocks starting with trait Wake/struct ExecutorInner/struct Waker are proposed for stabilization.
brain0 commented on Nov 11, 2018
|
(Probably none of you know me, yet I'd still like to offer my opinion, if that is appropriate.)

It is my understanding that the purpose of having an unstable API is that the ecosystem can experiment with it to ultimately avoid stabilizing a bad API. I don't see that this has happened here. Tokio has created a shim that essentially wraps an "std future" into a "0.1 future" with the only purpose of allowing async/await style futures. Apart from that, I haven't seen any experimentation with the std::future API. If tokio (as indicated above) is not even planning to use the new API instead of the old futures 0.1 API, then stabilizing it as-is will IMO be very bad for the ecosystem.

The situation for std::task is worse: from what I can see, it hasn't been used at all. The tokio shim merely provides a noop waker to satisfy std::future's poll signature, but that waker cannot be used and even panics when you try to. I'd like to see any implementation that actually uses std::task - I've been following TWIR all year and haven't found anything. I cannot see that there are comprehensive examples in the docs for std::task, or any reference implementations that show how the system is meant to be used as a whole.

My information is probably incomplete, so please tell me if I am missing anything.

As a side note, I started implementing an "as simple as possible" task executor based on the std APIs, just to understand them and play with them. I found the std::task stuff really complicated, and quickly realized that it still couldn't do everything I needed - most importantly, I needed to access some of the internal data in my Wake implementation, but this was not possible with LocalWaker. I would have to resort to storing information in thread-locals again, which defeats the purpose of having the waker passed as an argument to poll. |
|
Tokio is not the only user of futures; Fuchsia uses this API, including writing its own executor.
|
brain0 commented on Nov 11, 2018
|
Right, sorry, I must have overlooked that; it's even mentioned in the RFC. Is that stuff open source? I'd love to look at it. |
brain0 commented on Nov 11, 2018
|
Someone on reddit found this: https://fuchsia.googlesource.com/garnet/+/master/public/rust/fuchsia-async/src/ (executor.rs is interesting, for example). |
|
@brain0 after the weekend, I expect that @cramertj (or others from the Fuchsia team) will write in with more extensive detail about their experiences. The RFC also went to some length to lay out the historical context. These APIs have seen plenty of use, both on top of Tokio/Hyper (through various shims) in e.g. web framework code, in embedded settings, and in custom operating systems (Fuchsia). Could you spell out your concern re: the task system? It'd be helpful to keep discussion focused on specifics if possible. |
brain0 commented on Nov 11, 2018
|
@aturon First things first: Last weekend, I decided to try out the std::task and std::future system by implementing the simplest task executor I could think of, then combine that with mio. Of course, the result would not be as feature-rich or performant as tokio, but it would demonstrate the new APIs. There were lots of little details that felt "weird" about the task system:
Sorry for the rather verbose reply, I hope it still helped you understand my concerns. |
|
I've added T-lang to this RFC since some of the traits are more or less |
withoutboats removed the T-lang label on Nov 29, 2018
|
For folks who weren't aware, there's more conversation ongoing in aturon#15 discussing the precise API of |
rpjohnst referenced this pull request on Dec 7, 2018: Move the descriptions of LocalWaker and Waker and the primary focus. #15 (Merged)
|
MPI asynchronous APIs look like this:

```rust
extern {
    fn mpi_op(request: *mut MPI_Request);
}
```

And one uses them like this:

```rust
// Allocate a request and pin it:
let mut request: MPI_Request = MPI_REQUEST_NULL;
pin_mut!(request);

// Schedule an asynchronous operation:
unsafe { mpi_op(&mut request) }

extern { fn mpi_test(request: *mut MPI_Request) -> bool; }
extern { fn mpi_wait(request: *mut MPI_Request); }

// poll:
if unsafe { mpi_test(&mut request) } { /* .. done .. */ }
// .. do something ...

// poll:
if unsafe { mpi_test(&mut request) } { /* .. done .. */ }
// .. do something ..

// block:
unsafe { mpi_wait(&mut request) }
```

That is, MPI notifies task completion by writing the task status to some memory in the user process. This can happen synchronously, e.g., in

What is the best way to map APIs like these to

I find mapping this API to

I could just implement the

I ended up completely ignoring the

What I currently do is poll the futures manually to completion. I probably will end up creating my own executor that:

Either way, ignoring the

The PoC experiment is here: https://github.com/gnzlbg/ampi |
Matthias247 commented on Dec 14, 2018
|
@gnzlbg This should maybe be discussed elsewhere, since it's an application-specific problem. However, since it also sheds some light on what

The short version: the MPI API you present doesn't really seem suitable for being wrapped in

Really the only thing you can do is spin up a second thread which monitors MPI operations and calls

Now this is part 1; however, you might run into even more challenges: Rust

A solution for this can be to create some kind of

Since this is all a lot of hassle, the best solution for integrating those APIs into async code might be to just execute the operations in a synchronous fashion ( |
Yes, this is certainly the feeling I was getting, which was resulting in some frustration.
Generally no, these APIs are not thread safe. One can request thread-safety on initialization, which adds some synchronization overhead, and one can query the "level" of thread-safety available for the current process. See
There is an API for this:
There is also an API for this:
One of the main advantages of MPI is the ability to dispatch DMA or RMA operations and just let it happen while doing something else in the same single thread. If one can spawn multiple threads, then having a thread pool with a couple of threads, where one thread always schedule all MPI operations as quickly as possible, and the other threads just block on them, might definitely be an alternative, but this is often not desired. (EDIT: I expect this to be easy to build on top of a zero-cost API, but a zero-cost API cannot be built on top of this).
I don't think this is a realistic option. While most MPI implementations nowadays rely at least partially on |
tmandry commented on Dec 14, 2018 (edited)
|
At a high level: have a thread that does nothing but wait on all outstanding MPI operations with

Adding new operations to be waited on is one tricky part. You may need a special MPI operation that you can manually "complete", to wake the thread up again once you've sent it a new operation. Then the thread can update the list of ops it passes to |
Matthias247 commented on Dec 14, 2018
|
@tmandry Yes, after seeing those APIs, running |
MPI lets you drive asynchronous execution without doing any memory allocations in a single-threaded process. The problem I don't know how to solve is how to offer a nice

If the added overhead of boxing Futures, spawning multiple threads, using |
Ralith commented on Dec 14, 2018
|
You should be able to write an executor that calls |
Matthias247 added some commits on Nov 18, 2018
|
This RFC has been updated to reflect the proposed changes made by @Matthias247 in aturon#15 |
> The implementation of an executor schedules the tasks it owns in a cooperative
> fashion. It is up to the implementation of an executor whether one or more
> operating system threads are used for this, as well as how many tasks can be
> marker traits, while `LocalWaker` doesn't. This means a `Waker` can be sent to
> another thread and stored there in order to wake up the associated task later on,
> while a `LocalWaker` cannot be sent. Depending on the capabilities of the underlying
> executor a `LocalWaker` can be converted into a `Waker`. Most executors in the
Matthias247 commented on Dec 19, 2018 (edited)
|
Thanks @withoutboats for merging it. Copying my summary of open discussion points from the PR here, gathered from this thread and the original stabilization PR:
|
Matthias247 commented on Dec 19, 2018
|
And here was the latest idea for

If we split the trait into a version which is safe and one which is `unsafe`:

```rust
pub trait ArcWake: Send + Sync {
    fn wake(arc_self: &Arc<Self>);
    fn into_waker(wake: Arc<Self>) -> LocalWaker where Self: Sized;
    fn into_waker_ref(wake: &Arc<Self>) -> LocalWakerRef<'_> where Self: Sized;
}

pub trait ArcLocalWake: ArcWake {
    unsafe fn wake_local(arc_self: &Arc<Self>);
    unsafe fn into_waker_with_local_opt(wake: Arc<Self>) -> LocalWaker where Self: Sized;
    unsafe fn into_waker_ref_with_local_opt(wake: &Arc<Self>) -> LocalWakerRef<'_> where Self: Sized;
}
```

Users that want to hack together an executor in the simplest possible fashion can use

I'm still not 100% sure whether

Out of those I found |
> defining `Waker`s is provided, which does not require implementing a `RawWaker`
> and the associated vtable manually.
>
> This convience method is based around the `ArcWake` trait. An implementor of
OddCoincidence, Dec 21, 2018
Suggested change: `convience` → `convenience`.
> these requirements are fulfilled.
>
> Since many of the ownership semantics that are required here can easily be met
> through a reference-counted `Waker` implementation, a convienence method for
OddCoincidence, Dec 21, 2018
Suggested change: `convienence` → `convenience`.
aturon commented on Nov 10, 2018 (edited)
This RFC proposes to stabilize the library component for the first-class async/await syntax. In particular, it would stabilize:

- The std-level task system, i.e. `std::task::*`.
- The core `Future` API, i.e. `core::future::Future` and `std::future::Future`.

It does not propose to stabilize any of the async/await syntax itself, which will be proposed in a separate step. It also does not cover stabilization of the `Pin` APIs, which has already been proposed elsewhere.

This is a revised and significantly slimmed down version of the earlier futures RFC, which was postponed until more experience was gained on nightly.
Rendered
RFC status
The following need to be addressed prior to stabilization: