RFC: stabilize `std::task` and `std::future::Future` #2592
Conversation
aturon added some commits on Apr 24, 2018
aturon added the T-libs label on Nov 10, 2018
cc @rust-lang/lang -- I haven't tagged this as T-lang, since the Lang Team already approved the async/await RFC and this is "just" about library APIs. But of course y'all should feel free to weigh in. cc @Nemo157 @MajorBreakfast @tinaun @carllerche @seanmonstar @olix0r
@cramertj can you comment on
cc @rust-lang/libs, please take a look!
@cramertj I only briefly mentioned Fuchsia in the RFC, but it might be helpful for you/your team to leave some commentary here about your experience with the various iterations of the futures APIs.
Although this isn't strictly relevant to the technical merits of the proposed APIs, considering the sheer scope and history of what we're talking about adding to std it seems worth asking: are there any blog posts discussing Fuchsia's experience in more detail? This is the only part of the historical context I was completely unaware of, and I couldn't find any place that talks about it.
EDIT: I swear I started typing this before aturon's last comment
Thanks for putting this together. My experience with the proposed
If the proposed
Because of this, my plan for Tokio will be to stick on
Also, most of Tokio would require the
I understand the desire to drive progress forward. As I said, as far as I can tell, the proposed
Edit: I should clarify, Tokio will add support for
@carllerche You and I have talked about this a bunch on other channels, so I'll be repeating myself, but I want to write a response here so that everyone else is on the same page as well.
There are indeed limitations with async/await today, due not so much to the feature itself as to the lack of impl-trait-in-traits (or existential types) working sufficiently well (as well as, ultimately, GATs). They limit the ability to move foundational libraries to use async/await internally, and that's part of the reason we're not ready to stabilize the syntax itself yet. However, to be clear, none of these limitations connect to the
The 0.1/0.3 compatibility system, which allows for fine-grained/incremental migration, ends up doing a lot to lower the stakes. For example, it's already fairly painless to write code for hyper using
I think everything else you raise is discussed in the RFC as well, so I don't have more to add there!
What's the rationale for having both `task` and `future` modules? Since `future` only includes `Future`, it seems that having two modules doesn't pull its weight. Are we expecting to move a bunch of future stuff into std in the future?
Redrield commented on Nov 11, 2018
@nrc I'm not sure but perhaps the
ivandardi commented on Nov 11, 2018
If `Future` is a trait, then why isn't this example in the RFC written as ?
rpjohnst commented on Nov 11, 2018
A small bikeshed: I'm not sure I totally understand all the layers of
glaebhoerl reviewed on Nov 11, 2018
> onto a single operating system thread.
>
> To perform this cooperative scheduling we use a technique sometimes referred to
> as a "trampoline". When a task would otherwise need to block waiting for some
glaebhoerl (Contributor) on Nov 11, 2018
Is this the same "trampoline" concept which is used in the context of avoiding stack overflows for recursive calls?
@ivandardi that's related to the async/await syntax rather than the library code provided here. The design as I understand it is that
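(Not from the thread, just for illustration: a minimal sketch of that desugaring, assuming the `async fn` feature roughly as it later shipped. An `async fn` is approximately a plain function returning `impl Future`, which is why only the `Future` trait, not the syntax, needs to live in std.)

```rust
use std::future::Future;

// Hypothetical example: an `async fn` like this...
async fn fetch_len(input: String) -> usize {
    input.len()
}

// ...is roughly equivalent to a plain function returning `impl Future`.
fn fetch_len_desugared(input: String) -> impl Future<Output = usize> {
    async move { input.len() }
}
```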
Yes, both modules are expected to grow substantially over time. The futures crate contains a similar module hierarchy with a much richer set of APIs. In addition, there will eventually be a
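(For context only, not part of the RFC text: roughly what the two modules ended up holding once this RFC and later additions landed on stable.)

```rust
// The RFC-era API also had `LocalWaker`, which was later dropped;
// `std::task::Wake`, `future::ready`/`pending`, and `IntoFuture` arrived in later releases.
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
```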
jethrogb referenced this pull request on Nov 11, 2018: Tracking issue for async/await (RFC 2394) #50547 (Open)
jethrogb reviewed on Nov 11, 2018
> an API with greater flexibility for the cases where `Arc` is problematic.
>
> In general async values are not coupled to any particular executor, so we use trait
> objects to handle waking. These come in two forms: `Waker` for the general case, and
> Task execution always happens in the context of a `LocalWaker` that can be used to
> wake the task up locally, or converted into a `Waker` that can be sent to other threads.
>
> It's possible to construct a `Waker` using `From<Arc<dyn Wake>>`.
> /// - [`Poll::Ready(val)`] with the result `val` of this future if it
> ///   finished successfully.
> ///
> /// Once a future has finished, clients should not `poll` it again.
jethrogb (Contributor) on Nov 11, 2018
No behavior is specified for when clients do do that. I think we should say something. For example, "implementors may panic".
Nemo157 (Contributor) on Nov 11, 2018
I think we could even go a bit stronger than that, “implementors should panic, but clients may not rely on this”. All async fn futures guarantee this, and I believe so do the current futures 0.3 adaptors.
I think it would also be good to mention that calling poll again must not cause memory unsafety. The current mention that it can do anything at all makes it seem like it is allowed to have undefined behaviour, but since this is not an unsafe fn the implementer cannot rely on the client’s behaviour for memory safety purposes.
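(A minimal sketch, not from the thread, of the contract Nemo157 describes: a hypothetical future that panics if polled again after completion, while never doing anything memory-unsafe. The `poll` signature used here is the one that ultimately stabilized.)

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Hypothetical single-shot future used only for illustration.
struct Once {
    done: bool,
}

impl Future for Once {
    type Output = u32;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        if self.done {
            // "Implementors should panic, but clients may not rely on this."
            panic!("`Once` polled after completion");
        }
        self.done = true;
        Poll::Ready(42)
    }
}
```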
> When a task returns `Poll::Ready`, the executor knows the task has completed and
> can be dropped.
>
> ### Waking up
jethrogb (Contributor) on Nov 11, 2018
It's unclear at first glance which of the code blocks starting with `trait Wake` / `struct ExecutorInner` / `struct Waker` are proposed for stabilization.
brain0 commented on Nov 11, 2018
(Probably none of you know me, yet I'd still like to offer my opinion, if that is appropriate.)
It is my understanding that the purpose of having an unstable API is that the ecosystem can experiment with it to ultimately avoid stabilizing a bad API. I don't see that this has happened here. Tokio has created a shim that essentially wraps an "std future" into a "0.1 future" with the only purpose of allowing async/await-style futures. Apart from that, I haven't seen any experimentation with the std::future API. If tokio (as indicated above) is not even planning to use the new API instead of the old futures 0.1 API, then stabilizing it as-is will IMO be very bad for the ecosystem.
The situation for std::task is worse: from what I can see, it hasn't been used at all. The tokio shim merely provides a noop waker to satisfy std::future's poll signature, but that waker cannot be used and even panics when you try to. I'd like to see any implementation that actually uses std::task - I've been following TWIR all year and haven't found anything. I cannot see that there are comprehensive examples in the docs for std::task, or any reference implementations that show how the system is meant to be used as a whole.
My information is probably incomplete, so please tell me if I am missing anything.
As a side note, I started implementing an "as simple as possible" task executor based on the std APIs, just to understand them and play with them. I found the std::task stuff really complicated, and quickly realized that it still couldn't do everything I needed - most importantly, I needed to access some of the internal data in my Wake implementation, but this was not possible with LocalWaker. I would have to resort to storing information in thread-locals again, which defeats the purpose of having the waker passed as an argument to poll.
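(A sketch of the kind of "as simple as possible" executor described above, written against the `std::task::Wake` trait as it later stabilized rather than the RFC-era `LocalWaker`/`Arc<dyn Wake>` API. It drives a single future to completion by parking the thread between polls.)

```rust
use std::future::Future;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// Waker that signals the blocked thread when the future is ready to be re-polled.
struct ThreadWaker {
    ready: Mutex<bool>,
    cond: Condvar,
}

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        *self.ready.lock().unwrap() = true;
        self.cond.notify_one();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let thread_waker = Arc::new(ThreadWaker {
        ready: Mutex::new(false),
        cond: Condvar::new(),
    });
    let waker = Waker::from(thread_waker.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        // Sleep until `wake` flips the flag, then poll again.
        let mut ready = thread_waker.ready.lock().unwrap();
        while !*ready {
            ready = thread_waker.cond.wait(ready).unwrap();
        }
        *ready = false;
    }
}
```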
Tokio is not the only user of futures, Fuchsia uses this API, including writing their own executor.
brain0 commented on Nov 11, 2018
Right, sorry, I must have overlooked that, it's even mentioned in the RFC. Is that stuff open source? I'd love to look at it.
brain0 commented on Nov 11, 2018
Someone on reddit found this: https://fuchsia.googlesource.com/garnet/+/master/public/rust/fuchsia-async/src/ (executor.rs is interesting, for example).
@brain0 after the weekend, I expect that @cramertj (or others from the Fuchsia team) will write in with more extensive detail about their experiences. The RFC also went to some length to lay out the historical context. These APIs have seen plenty of use, both on top of Tokio/Hyper (through various shims) in e.g. web framework code, in embedded settings, and in custom operating systems (Fuchsia).
Could you spell out your concern re: the task system? It'd be helpful to keep discussion focused on specifics if possible.
brain0 commented on Nov 11, 2018
@aturon First things first: last weekend, I decided to try out the std::task and std::future system by implementing the simplest task executor I could think of, then combining that with mio. Of course, the result would not be as feature-rich or performant as tokio, but it would demonstrate the new APIs. There were lots of little details that felt "weird" about the task system:
Sorry for the rather verbose reply, I hope it still helped you understand my concerns.
rfcbot removed the proposed-final-comment-period label on Jan 28, 2019
Matthias247 commented on Jan 28, 2019
Even if we do so, we might get to 2 low-level primitives/traits. That doesn't seem to justify requiring another module. I think they both would just exist fine inside the highest level of one async crate (like
It is not possible to implement the current future 0.1
aturon added some commits on Jan 28, 2019
Several commits were added to Matthias247/rust that referenced this pull request on Jan 30, 2019
@brain0 It doesn't seem unreasonable for
Once we've provided a higher level
@cramertj @Matthias247 I'd be interested in your thoughts on providing some sort of
I would like the RFC to specify whether executor implementations are permitted to "inline" polling a task in the
Being able to do so would provide significant performance improvements in some cases, but would impose restrictions on Mutex limitations. This is something that would have to be defined up front, as I do not believe it would be backwards compatible to add that ability.
brain0 commented on Jan 30, 2019
@withoutboats That would help too, but all you'd get would be a
carllerche reviewed on Jan 30, 2019
> pointer table (vtable) which provides functions to `clone`, `wake`, and
> `drop` the underlying wakeable object.
>
> This mechanism is chosen in favor of trait objects since it allows for more
carllerche (Member) on Jan 30, 2019
This paragraph is not obviously true to me. Could a specific example be provided showing how a raw vtable works better than a trait object? I can't think of a situation in which manually defining the vtable can unlock a capability vs. a trait object.
Additionally, this API seems to provide a significant forwards compatibility hazard. With traits, additional fns can be added w/o breaking changes by including a default implementation. I do not see how this can be accomplished given the raw vtable.
Centril (Contributor) on Jan 31, 2019
> Additionally, this API seems to provide a significant forwards compatibility hazard. With traits, additional fns can be added w/o breaking changes by including a default implementation. I do not see how this can be accomplished given the raw vtable.
This can be solved by using #[non_exhaustive] as well as exposing a constructor function on RawWaker that takes the minimum required things (i.e. the things required today).
carllerche (Member) on Jan 31, 2019
While I agree the API can be made forwards compatible, it is still unclear to me why the raw vtable is being used instead of trait objects.
If the raw vtable is used, the RFC should be updated to:
- Include examples illustrating why the raw vtable is a superior option.
- Make the API forwards compatible.
jethrogb (Contributor) on Jan 31, 2019
How would you write the trait object version of RawWaker without using Box?
carllerche (Member) on Jan 31, 2019
Assuming you are referencing the mutability issue, it could most likely be *const RawWake instead of *mut.
Otherwise, I'm not sure what wart you are talking about.
I'm not sure how manually defining a vtable (which is basically manually creating a trait object) is easier to grok than just using a trait object. Either way, the RFC states the reasoning as:
> it allows for more flexible memory management schemes.
and this is the point that I am disputing.
jrobsonchase on Jan 31, 2019
This one:
/// FIXME(cramertj)
/// This method is intended to have a signature such as:
///
/// ```ignore (not-a-doctest)
/// fn drop_raw(self: *mut Self);
/// ```
///
/// Unfortunately in Rust today that signature is not object safe.
/// Nevertheless it's recommended to implement this function *as if* that
/// were its signature. As such it is not safe to call on an invalid
/// pointer, nor is the validity of the pointer guaranteed after this
/// function returns.
Or is that no longer relevant and just failed to get removed?
carllerche (Member) on Jan 31, 2019
@jrobsonchase IMO that wart seems relatively minor in context.
The forwards compatibility issue is real IMO. @Centril called out future proofing this. I assume that would require the constructor to have const fns? And, every time there is a change, creating a new constructor?
Matthias247 on Feb 1, 2019
One other important conceptual difference is that the raw vtable doesn't imply as strongly that we are talking about a real object. E.g. current executors/wakeups are implemented through various mechanisms:
- As global functions
- As member functions on refcounted objects
- As thread-local functions
All of those could be modeled with UnsafeWake too, but the casts of some of those things into that trait were not really a natural fit. It seemed like mostly people with good knowledge of how trait objects are laid out in memory could implement it.
> With traits, additional fns can be added w/o breaking changes by including a default implementation. I do not see how this can be accomplished given the raw vtable.
I don't see compatibility issues. Either the default implementation can be added through new constructors, or the relevant field on RawWakerVTable simply stays null, and Waker calls the default method when the thing is not populated.
In the C world this approach is even used to extend things in an ABI compatible way - which is something we don't even require at the moment.
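(To make the flexibility point concrete, a hedged sketch of a waker built directly from the raw vtable, using the constructor shape that ultimately stabilized, which also carries a `wake_by_ref` slot: no allocation, no reference counting, and no trait object behind the data pointer.)

```rust
use std::task::{RawWaker, RawWakerVTable, Waker};

// A waker that does nothing, assembled without any heap allocation.
unsafe fn noop_clone(_data: *const ()) -> RawWaker {
    noop_raw_waker()
}
unsafe fn noop(_data: *const ()) {}

static NOOP_VTABLE: RawWakerVTable = RawWakerVTable::new(noop_clone, noop, noop, noop);

fn noop_raw_waker() -> RawWaker {
    RawWaker::new(std::ptr::null(), &NOOP_VTABLE)
}

fn noop_waker() -> Waker {
    // Safety: every vtable entry ignores the data pointer, so null is fine.
    unsafe { Waker::from_raw(noop_raw_waker()) }
}
```

The same shape works for wakers backed by global state, thread-locals, or intrusively reference-counted nodes, which is the range of schemes mentioned above.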
jethrogb (Contributor) on Feb 1, 2019
I think you can do all of this with trait objects, you just need a zero-sized type in some of those cases.
I agree that adding new fields would mean they'd need to be optional, but that seems much better than the current situation, where nothing new can be added ever. As an example, Kotlin's
Can you elaborate on this? I'm not able to grasp the request.
I don't follow this. Ultimately the definition of
tikue commented on Jan 31, 2019
I believe @carllerche is imagining an implementation that looks like a callback. Instead of appending to a task queue, it immediately calls
@carllerche Got it, thanks! That does seem important to clarify.
Matthias247 commented on Jan 31, 2019
A strong no from my point of view. The ability to call a continuation inline (instead of scheduling it) is one of the biggest sources of deadlocks, call stack overflows and other reentrancy issues in things like C# Tasks, which allow for that. I am totally aware of the potential performance gains, but I am very wary of hidden issues and of non-deterministic behavior (if one executor is swapped out for another which makes that kind of decision).
@Matthias247 fwiw, I have only inlined the poll w/ a depth of 1. So, there is no risk of overflow.
Presumably this would include the restriction that the task being awoken cannot be a task that is currently being polled (e.g. a task polling itself)? If so, I personally have no strong objection to this. However, it does seem like something that would require very clear documentation, such as noting to never
I checked, and I personally couldn't find any code that couldn't be adapted to follow this pattern, but I certainly have code that violates these rules today.
For the community: does anyone know of code that needs to call
Matthias247 commented on Feb 1, 2019
@carllerche Ok, that's a reasonable guard against stack overflows. However, we would still face the other issues.
Yeah, the futures-intrusive stuff I've worked on unfortunately requires it. Because outside of the lock, the field which holds the
Matthias247 commented on Feb 1, 2019
I don't see a benefit of that. You can downcast inside the functions that get stored in the vtable, which is what will typically happen. Adding any other query functions to
Interesting question just raised on
A commit was added to Matthias247/rust that referenced this pull request on Feb 3, 2019
Matthias247 commented on Feb 3, 2019
@Nemo157 I don't know exactly about that one, but I encountered a related question: if APIs return concrete
@Matthias247 luckily
Matthias247 commented on Feb 3, 2019
@Nemo157 Oh cool, I totally missed that. Last time I tried, it didn't work.
aturon commented on Nov 10, 2018 (edited by cramertj)
This RFC proposes to stabilize the library component for the first-class `async`/`await` syntax. In particular, it would stabilize:
- The std-level task system, i.e. `std::task::*`.
- The core `Future` API, i.e. `core::future::Future` and `std::future::Future`.
It does not propose to stabilize any of the `async`/`await` syntax itself, which will be proposed in a separate step. It also does not cover stabilization of the `Pin` APIs, which has already been proposed elsewhere.
This is a revised and significantly slimmed down version of the earlier futures RFC, which was postponed until more experience was gained on nightly.
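(For reference, a sketch of the `Future` trait in the shape that ultimately stabilized; the RFC text itself went through several iterations of the waker argument, e.g. `LocalWaker` vs. `Context`.)

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Future {
    /// The type of value produced on completion.
    type Output;

    /// Attempt to resolve the future, registering the current task's waker
    /// (via `cx`) to be notified if the value is not ready yet.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```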
Rendered
RFC status
The following need to be addressed prior to stabilization: