RFC: add futures to libcore #2395
Conversation
aturon added the T-libs label Apr 6, 2018
aturon
referenced this pull request
Apr 6, 2018
Merged
async/await notation for ergonomic asynchronous IO #2394
jonas-schievink
reviewed
Apr 6, 2018
```rust
}
/// Check whether this error is the `shutdown` error.
pub fn is_shutdown() -> bool;
```
aturon
added some commits
Apr 6, 2018
eddyb
reviewed
Apr 6, 2018
```rust
pub trait Future {
    /// The type of value produced on completion.
    type Item;
```
eddyb
Apr 6, 2018
Member
We were discussing this last week: I believe Output is more in line here with existing conventions (e.g. Fn traits, operator overloading traits), and Item would be more appropriate for Stream (by association with Iterator).
withoutboats
Apr 9, 2018
Contributor
The main reason to use Item, from my perspective, is that it is 2 characters shorter than Output. For Fn traits this doesn't matter, they have -> syntax, but Future, like Iterator, will often be bound Future<Item = ?>.
I don't have a strong opinion about this choice.
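A tiny illustration of the ergonomic point, using a local stand-in trait rather than the RFC's actual definition: the `Fn` traits get `->` sugar, so the associated type's name never appears, while a future bound spells the name out at every use site.

```rust
// Stand-in trait purely for illustration; not the RFC's `Future`.
trait Future {
    type Output;
}

// With `Fn` traits the output type is written via `->` sugar...
fn run_fn<F: Fn() -> String>(_f: F) {}

// ...while a future bound names the associated type explicitly, so the
// choice of `Output` vs. `Item` is visible in every such signature.
fn run_future<F: Future<Output = String>>(_f: F) {}

fn main() {}
```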
Centril
Apr 9, 2018
•
Contributor
@withoutboats We could do some sort of Future() -> R deal for all traits of the form:
```rust
trait TraitName {
    type Output;
    // other stuff does not matter...
}
```
This might have a knock-on benefit for people who define their own "Fn-like" traits.
Personally I think 2 more characters for a more apt name is the right way to go in this instance.
Diggsey
Apr 9, 2018
Contributor
I like the Fn => Future, Iterator => Stream correspondence, and I think using Output will avoid confusion in places where futures/streams are being mixed.
carllerche
Apr 9, 2018
Member
@withoutboats I am in strong support of keeping it Item for the reason you mentioned. As someone who works w/ futures all day, I type Item constantly. The shorter name makes a big difference here.
The `Fn` analogy doesn't make sense. A future is not a function; it is a value that will complete in the future.
`Item` matches `Iterator`, which is closer to what a future is.
Diggsey
Apr 9, 2018
Contributor
> The `Fn` analogy doesn't make sense. A future is not a function; it is a value that will complete in the future.

The analogy does make sense: both a `Fn` and a `Future` produce a single `Output`. Both an `Iterator` and a `Stream` produce multiple `Item`s.
eddyb
Apr 10, 2018
Member
To make my position a bit clearer: I think Item made more sense when we had something like Result<Self::Item, Self::Error>, since it was "the item in the result".
cramertj
Apr 17, 2018
•
Member
> For Fn traits this doesn't matter, they have -> syntax, but Future, like Iterator, will often be bound Future<Item = ?>.

If we wanted to be really wild, we could make `->` sugar work for the `Future` trait:

```rust
fn foo() -> impl Future -> u32 {
    println!("Foo called");
    async {
        println!("Foo polled");
        5
    }
}

fn serve(f: impl FnMut(Request) -> impl Future -> Response) {
    // serve http requests by calling `f`
    ...
}
```
aidanhs
reviewed
Apr 6, 2018
```rust
/// of data on a socket, then the task is recorded so that when data arrives,
/// it is woken up (via `cx.waker()`). Once a task has been woken up,
/// it should attempt to `poll` the future again, which may or may not
/// produce a final value at that time.
```
aidanhs
Apr 6, 2018
Member
I'd love to have it made explicit that it's not an error to poll a future even before a registered 'interest' has caused a task wakeup.
HadrienG2
Apr 7, 2018
What would be the use case for polling a future multiple times like that?
(Intuitively, I would expect that assuming that it doesn't happen allows for some implementation simplification, e.g. one doesn't need to guard against multiple wakeups being scheduled for a single I/O event.)
Nemo157
Apr 7, 2018
Contributor
The main one I can think of is implementing `select` and `join`: you don't know which of your sub-futures caused the wakeup, so you have to poll them all to see if any are ready yet.
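For readers following along, here is a deliberately minimal sketch of that situation. It uses a simplified, self-contained trait (not the RFC's `Future` signature) just to show why a naive `select` has to poll every child after any wakeup: the combinator cannot tell which child arranged the wakeup that fired.

```rust
enum Poll<T> {
    Ready(T),
    Pending,
}

// Simplified stand-in for the real trait; no task context, no pinning.
trait SimpleFuture {
    type Output;
    fn poll(&mut self) -> Poll<Self::Output>;
}

struct Select<A, B> {
    a: A,
    b: B,
}

impl<A, B> SimpleFuture for Select<A, B>
where
    A: SimpleFuture,
    B: SimpleFuture<Output = A::Output>,
{
    type Output = A::Output;

    fn poll(&mut self) -> Poll<Self::Output> {
        // Poll *both* children; each child that is still pending is expected
        // to have (re)registered its own wakeup before returning.
        if let Poll::Ready(v) = self.a.poll() {
            return Poll::Ready(v);
        }
        if let Poll::Ready(v) = self.b.poll() {
            return Poll::Ready(v);
        }
        Poll::Pending
    }
}

struct AlwaysReady(u32);

impl SimpleFuture for AlwaysReady {
    type Output = u32;
    fn poll(&mut self) -> Poll<u32> {
        Poll::Ready(self.0)
    }
}

fn main() {
    let mut sel = Select { a: AlwaysReady(1), b: AlwaysReady(2) };
    if let Poll::Ready(v) = sel.poll() {
        println!("first ready value: {}", v);
    }
}
```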
HadrienG2
Apr 7, 2018
I would think that such O(N) "poll all the inner futures" behaviour would be considered problematic. Hasn't there been some work to help a future figure out the cause of a wakeup?
Nemo157
Apr 7, 2018
Contributor
IMO select and join should only be used for a very small number of futures, e.g. selecting a single future with a timeout to cancel it. If you have many sub-parts then these should be spawned off to separate tasks (and so be woken separately by the Executor) and a more efficient way to notify when all/one is complete should be used.
aidanhs
Apr 7, 2018
Member
As noted, select and join are the ones that spring to mind. My main goal is to make it very clear whether or not it's permitted (and the recommended way of doing select if not - I can think of ways, but they're not great) as this could cause a nasty ecosystem split if some authors make the simplifying assumptions @HadrienG2 mentions and others require otherwise.
sbstp
commented
Apr 6, 2018
> Depending on how likely […]

I guess that beyond the trait, a particular executor could provide this method if it does not have a limit on the number of tasks and such.
fbstj
reviewed
Apr 6, 2018
```rust
where E: BoxExecutor;
/// Get the `Waker` associated with the current task.
pub fn waker(&self) -> &Waker
```
fbstj
reviewed
Apr 6, 2018
```rust
// this impl is in `std` only:
impl From<Box<dyn Future<Item = ()> + Send>> for Task {
    pub fn from_box(task: ) -> Task;
```
fbstj
Apr 6, 2018
missing argument type? or maybe an indication that the argument type isn't interesting (aka _) ?
@sbstp Note, such a convenience is provided by the […]
fbstj
reviewed
Apr 6, 2018
```rust
impl<'a> Context<'a> {
    /// Note: this signature is future-proofed for `E: Executor` later.
    pub fn new<E>(waker: &'a Waker, executor: &'a mut E) -> Context<'a>
```
fbstj
Apr 6, 2018
I don't understand how this is the case. If this is stabilised with an argument bound that's going to change, how is that future-proof? Is it because `Executor` will be a supertype/parent of `BoxExecutor`, so the type is only being opened up to things it couldn't have accepted before?
aturon
Apr 6, 2018
Author
Member
Along those lines, yes -- because there will be a blanket impl from BoxExecutor to Executor.
fbstj
Apr 6, 2018
Thanks, I was confused on my first read-through, but now I realise that the argument will effectively be a subtype of the eventual trait.
One note concerning vetting and stabilization, which is very important: the basic shape of the futures API changes (regarding the task system) has been vetted for a couple of months on 0.2 preview releases and integrations. The pinning APIs have been quasi-formally checked. And the proposed stabilization plan here will give us an additional 6-7.5 months of experience with these APIs before they are shipped on stable.
bugaevc
reviewed
Apr 6, 2018
```rust
/// Represents that a value is not ready yet.
///
/// When a function returns `Pending`, the function *must* also
```
bugaevc
Apr 6, 2018
An alternative design that should at least be mentioned in the alternatives section is to make Poll::Pending wrap some "I have scheduled my task!" token. A future could obtain such a token either from a Poll::Pending returned by one of its child futures, or by explicitly constructing one (using some method/API whose naming makes it painfully clear that it's not okay to call it without actually scheduling the task to be woken up by some external mechanism). Yay learnability, yay type system. This was proposed (and, I assume, rejected) before, so it would make sense to link to that discussion from this RFC.
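A rough sketch of the type-level shape being suggested, with entirely made-up names (this is not something the RFC proposes, just an illustration of the idea):

```rust
/// Proof token: "a wakeup has been arranged for the current task".
/// Made-up type for illustration only.
struct WakeupScheduled(());

impl WakeupScheduled {
    /// The deliberately loud name is the point: calling this without having
    /// actually registered a wakeup elsewhere is a bug.
    fn i_registered_a_wakeup_i_promise() -> WakeupScheduled {
        WakeupScheduled(())
    }
}

enum Poll<T> {
    Ready(T),
    /// `Pending` cannot be constructed without a token.
    Pending(WakeupScheduled),
}

/// A parent future either finishes or forwards the token it got from a child.
fn poll_parent(child: Poll<u32>) -> Poll<u32> {
    match child {
        Poll::Ready(v) => Poll::Ready(v + 1),
        Poll::Pending(token) => Poll::Pending(token),
    }
}

fn main() {
    let token = WakeupScheduled::i_registered_a_wakeup_i_promise();
    let _ = poll_parent(Poll::Pending(token));
}
```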
HadrienG2
Apr 7, 2018
•
I agree, it is really nice when strong "you have been warned" wording in comments can be replaced with a little type hack which ensures that the API simply cannot be used incorrectly.
tanriol
Apr 7, 2018
Well, incorrect usage is still possible if you have multiple child futures and forget to poll some of them, but it seems much less likely.
sbstp
commented
Apr 6, 2018
@aturon […]
HadrienG2
reviewed
Apr 6, 2018
HadrienG2 left a comment
Although my comments may sound like a very confused "Hey, what's going on?", they are only about making sure that the motivations and design of whatever gets engraved in RFC stone is made crystal clear. I'm actually pleasantly surprised by how far Rust's futures have progressed towards being something that one can reasonably describe on a ~1k line budget. Good job!
```rust
/// the associated task onto this queue.
fn wake(&Arc<self>);
}
```
HadrienG2
Apr 6, 2018
You may want to clarify this part a bit more. At least two points are currently unclear for the Tokio-uninitiated:
- The description "Executors do so by implementing this trait." suggests that the Wake trait is directly implemented by an Executor impl. However, it must actually be implemented by a proxy type spawned by the executor, otherwise "wake()" wouldn't be able to tell which task must be awoken.
- Along the way, a word or two could probably be added on the rationale for this complex self type (Why is a shared reference to an Arc needed? Why is &self not enough?).
lambda
Apr 7, 2018
Contributor
> Along the way, a word or two could probably be added on the rationale for this complex self type (Why is a shared reference to an Arc needed? Why is &self not enough?).
This is something I was wondering.
This RFC describes what the proposed API is, but it doesn't do a very good job of explaining why the API is the way it is. Why do we need these Arcs? It describes an alternative with an unsafe interface that acts essentially like an Arc with shared ownership through cloning, and mentions the ArcObj alternative, but it doesn't really spell out why you need this Arc-like behavior in the first place.
I think it would help to describe a little bit more about exactly how this executor/waker interaction is supposed to work; possibly with an example of a very basic, simple executor which is a single-threaded event loop, and if that doesn't fully clarify why you need Arc then describe why it is that more powerful thread-pool based executors need this Arc-like behavior.
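To make the ownership question concrete, here is a deliberately tiny, hypothetical sketch (none of these names are the RFC's API): the executor's run loop and a separate reactor/timer thread both hold the same reference-counted wake handle, which is why some form of shared ownership (an `Arc`, or something `Arc`-like) appears in the design at all.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Hypothetical wake handle: for a single task it can be as small as a flag.
struct WakeHandle {
    woken: AtomicBool,
}

impl WakeHandle {
    fn wake(&self) {
        self.woken.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let handle = Arc::new(WakeHandle { woken: AtomicBool::new(false) });

    // A "reactor" (timer/I/O) thread keeps its own clone of the handle alive
    // long after the executor's current call to `poll` has returned.
    let remote = Arc::clone(&handle);
    let reactor = thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        remote.wake();
    });

    // The executor's (very naive) run loop: wait until the task is woken,
    // at which point it would call `poll` on that task again.
    while !handle.woken.load(Ordering::SeqCst) {
        thread::yield_now();
    }
    reactor.join().unwrap();
    println!("task woken; executor would poll it again now");
}
```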
HadrienG2
Apr 7, 2018
•
After reading the RFC a couple more times, I think I have finally reverse-engineered in my mind why one would want to use an Arc here.
- Context only takes a shared reference to a Waker, so all it can provide to the executing future is a shared reference.
- But the future may potentially need to be awoken from a different, asynchronous code path, in which case there is a need for an owned Waker value that can be moved around.
- There should thus be a way to clone the Waker and potentially send the clone to a different thread, ergo, we need an Arc.
Of those, only point 1 is somewhat controversial. We could have imagined an alternate design in which the Context owns a Waker, and one can get it by consuming the Context (after all, do you still need that context if you are going to suspend the future?).
But this means that a new Waker must be created every time a future is polled, even if it is unused, and atomic increments/decrements are not cheap so we don't want to do that. Alternatively, we might provide Context with a way to create a Waker, but that would ultimately get quite complicated.
So considering that there are use cases for futures that can be awoken from multiple places (anything that combines multiple futures into one: join(), select()...), an Arc was probably deemed to be the most appropriate implementation.
Definitely not obvious, and still does not explain why this implementation needs to leak in the interface all the way to the self type. I agree with you that this RFC would benefit from explaining a bit more the "why", rather than mostly the "what".
```rust
/// This function is unsafe to call because it's asserting the `UnsafeWake`
/// value is in a consistent state, i.e. hasn't been dropped
unsafe fn wake(self: *mut self);
}
```
HadrienG2
Apr 6, 2018
Again, please clarify to the tokio-uninitiated why raw pointers are needed here and why more usual self types would have been ineffective.
```rust
pub fn wake(&self);
}

impl Clone for Waker { .. }
```
HadrienG2
Apr 6, 2018
If I understand the purpose of Waker correctly, you will probably want to assert that it is Send somewhere for completeness (or just mention it in the documentation).
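For what it's worth, one lightweight way to "assert it somewhere" is a compile-time bound check; sketched here with a stand-in type so the snippet is self-contained:

```rust
// Only compiles if `T: Send`; a zero-cost, compile-time assertion.
fn assert_send<T: Send>() {}

// Stand-in for the real `Waker`, purely so this sketch builds on its own.
struct MyWaker;

fn main() {
    assert_send::<MyWaker>(); // breaks the build if `MyWaker` stops being Send
}
```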
```rust
/// means that `spawn` is likely, but not guaranteed, to yield an error.
fn status(&self) -> Result<(), SpawnError> {
    Ok(())
}
```
HadrienG2
Apr 6, 2018
This API seems intrinsically racy, as its careful doc admits. What is its intended use? To avoid moving Tasks into an Executor without good reason?
```rust
pub fn is_shutdown(&self) -> bool;
// additional error variants added over time...
}
```
HadrienG2
Apr 6, 2018
•
With the current definition of SpawnError, failing to spawn a Task because an Executor with a bounded queue is overloaded means that said Task is lost forever. Is that intended? If not, you may want to provide a way to retrieve the Task which failed to spawn.
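One possible shape for "give the task back on failure", sketched with placeholder types (this is not the RFC's API, just an illustration of the suggestion):

```rust
// Placeholder types for illustration only.
struct Task;

enum SpawnErrorKind {
    Shutdown,
    QueueFull,
}

/// The error owns the task that could not be spawned, so the caller can
/// retry later, reroute it to another executor, or drop it deliberately.
struct SpawnError {
    kind: SpawnErrorKind,
    task: Task,
}

fn spawn(_task: Task) -> Result<(), SpawnError> {
    // A real executor would enqueue the task here and report queue pressure.
    Ok(())
}

fn main() {
    if let Err(err) = spawn(Task) {
        match err.kind {
            SpawnErrorKind::QueueFull => {
                let _retry_later = err.task; // the task is not lost
            }
            SpawnErrorKind::Shutdown => { /* drop err.task deliberately */ }
        }
    }
}
```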
> As stated in the doc comment, the expectation is that all executors will wrap their task execution within an `enter` to detect inadvertent nesting.
seanmonstar
Apr 7, 2018
Contributor
This API grew from our experience of users using a blocking executor inside a future that is itself being executed on some blocking executor. In most cases, the app would just deadlock, since the `future.wait()` call would park the thread until it was ready, when the only way it could know it was ready was epoll on the same thread.
If we could detect blocking operations of all kinds while in the context of an Enter guard, that'd be amazing!
HadrienG2
Apr 7, 2018
•
Here is an attempt at minimally phrasing this rationale in the doc comments or the RFC.
"Doing so ensures that executors aren't accidentally invoked in a nested fashion. When that happens, the inner executor can block waiting for an event that can only be triggered by the outer executor, leading to a deadlock."
(I would also love for OSes to stop overusing blocking APIs so much. When even reading from or writing to a memory address is a potentially blocking operation, it gets kind of ridiculous...)
```rust
trait Executor {
    fn spawn(&mut self, task: Future<Item = ()> + Send) -> Result<(), SpawnError>;
```
HadrienG2
Apr 6, 2018
Since you are talking about taking dyn by value, shouldn't there be a "dyn" keyword here? (In general, use of this keyword is somewhat inconsistent in this RFC)
```rust
///
/// This method will panic if the default executor is unable to spawn.
/// To handle executor errors, use the `executor` method instead.
pub fn spawn(&mut self, f: impl Future<Item = ()> + 'static + Send);
```
```rust
/// threads or event loops. If it is known ahead of time that a call to
/// `poll` may end up taking awhile, the work should be offloaded to a
/// thread pool (or something similar) to ensure that `poll` can return
/// quickly.
```
HadrienG2
Apr 6, 2018
•
You may or may not want to clarify that a Future-aware thread pool is needed here. Most thread pool libraries currently available on crates.io either do not provide a way to synchronize with the end of the computation or do so by implicitly blocking, neither of which is appropriate here. Only some thread pools will be usable here.
seanmonstar
Apr 7, 2018
Contributor
It should be possible to offload blocking work to any kind of thread pool and "synchronize" the result back to a future, using something like futures' current oneshot channel:
You send the arguments of your computation, and a `oneshot::Sender<ReturnType>`, to the other thread. Once the computation is completed, you'd call `tx.send(ret)`. In your `Future`, you'd just await the result from the `oneshot::Receiver`.
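A stand-in sketch of that pattern, using std's channel in place of futures' oneshot so it runs on its own (in real async code the receiving side would be polled as a future rather than blocked on):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u64>();
    let input = 40_u64;

    // Ship the input plus the sender to any thread pool / worker thread;
    // the blocking work happens off the executor's thread.
    thread::spawn(move || {
        let result = input + 2; // stand-in for an expensive computation
        let _ = tx.send(result);
    });

    // In async code this would be awaiting a oneshot receiver; the blocking
    // `recv` here is only to keep the sketch self-contained.
    assert_eq!(rx.recv().unwrap(), 42);
}
```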
HadrienG2
Apr 7, 2018
You're right, this can be done with any kind of fire-and-forget API (and I think this is how current futures-aware thread pools are implemented). It may not be immediately obvious, however, and will not work with blocking APIs (scoped_threadpool, rayon...).
Again, I'm not 100% sure if clarifying this part is needed, just thought it would be another possible area for design documentation improvements.
> [guide-level-explanation]: #guide-level-explanation
>
> The `Future` trait represents an *asynchronous* computation that may eventually produce a final value, but don't have to block the current thread to do so.
HadrienG2
Apr 6, 2018
•
The single most common futures-rs beginner mistake is to assume that a task is spawned in the background as soon as a future is created. So right from the introduction, we may want to stress the fact that Rust's futures are executed lazily, and not eagerly as in most other futures implementations.
Here is a possible wording tweak that subtly stresses this point: "...an asynchronous and lazy computation...".
Also, there is a small don't -> doesn't typo in there.
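The laziness point in one tiny, self-contained sketch (a hand-rolled stand-in, not the proposed trait): constructing the value does no work; the side effect happens only when something polls it.

```rust
struct PrintOnPoll(&'static str);

impl PrintOnPoll {
    // Stand-in for `Future::poll`; the real signature is richer.
    fn poll(&mut self) {
        println!("{}", self.0);
    }
}

fn main() {
    let mut fut = PrintOnPoll("polled!"); // constructing it prints nothing
    // ...any amount of time can pass; still nothing has run...
    fut.poll(); // "polled!" appears only now
}
```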
HadrienG2
Apr 7, 2018
I definitely think so. I cannot think of any valid use case for creating a future and dropping it afterwards without having done anything else with it, and it is a common mistake.
seanmonstar
reviewed
Apr 7, 2018
> ## Prelude
>
> The `Future` and `FutureRes` traits are added to the prelude.
carllerche
Apr 7, 2018
Member
At least, could they be moved to std::futures::prelude (or whatever module they will be in)?
jwilm
Apr 7, 2018
I have reservations about this. Could this section be extended with some rationale?
Centril
Apr 7, 2018
•
Contributor
To provide some context for prelude-or-not evaluation (I don't have an opinion here yet), here's a snippet from std::prelude:
> The prelude is the list of things that Rust automatically imports into every Rust program. It's kept as small as possible, and is focused on things, particularly traits, which are used in almost every single Rust program.
So we should decide whether Futures are "used in almost every single Rust program".
carllerche
Apr 7, 2018
Member
I would say that most Rust programs would not need Future or FutureRes. Not even all async programs will need it (if you use mio directly, you have no need for Future).
The worst aspect is that I anticipate that including this in the prelude will cause problems, as I described in this comment.
Given that futures 0.2 has already been released and has not had any significant usage yet, it seems very rushed to include this in […]

edit: Removed "I am not going to get involved in the details of this RFC" because that clearly wasn't true.
seanmonstar
reviewed
Apr 7, 2018
```rust
}
```
> We need the executor trait to be usable as a trait object, which is why `Task` […]
seanmonstar
Apr 7, 2018
Contributor
It seems like the additional information about Task isn't actually in the no_std section. Could it be expanded upon, and why it exists instead of Box<Future>? (I realize the reason is because there is no Box in no_std.)
carllerche
reviewed
Apr 7, 2018
> subtrait for that case, equipped with some additional adapters:
```rust
trait FutureRes<T, E>: Future<Item = Result<T, E>> {
```
Centril
Apr 7, 2018
Contributor
I think it should; we don't use these kinds of short forms elsewhere in libstd, right?
durka
Apr 8, 2018
Contributor
FutureResult sounds like something that comes out of a Future (like LockResult).
`BikeShed::new("FallibleFuture").build()`
seanmonstar
reviewed
Apr 7, 2018
> subtrait for that case, equipped with some additional adapters:
```rust
trait FutureRes<T, E>: Future<Item = Result<T, E>> {
```
seanmonstar
Apr 7, 2018
Contributor
Would trait aliases be a possible way of doing this, instead of a separate trait? Looks like it continues to see progress.
HadrienG2
Apr 7, 2018
Doesn't this mean that every corresponding method would have to be added to the Future trait with a where Self: FutureRes bound? Or is there a cleaner way to do this with trait aliases?
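As a sketch of what that could look like (assuming the unstable `trait_alias` feature and a local stand-in trait, so none of this is the RFC's actual definition): the alias itself is short, but because aliases carry no items, the `Result`-specific combinators would still need an extension trait with a blanket impl.

```rust
#![feature(trait_alias)] // nightly-only at the time of writing

// Stand-in trait so the sketch is self-contained.
trait Future {
    type Item;
}

// The alias: any future whose item is a `Result<T, E>`.
trait FutureRes<T, E> = Future<Item = Result<T, E>>;

// Result-specific conveniences still need a separate extension trait...
trait FutureResExt<T, E>: Future<Item = Result<T, E>> {
    // e.g. map_ok, map_err, and_then adapters would go here
}

// ...provided to every qualifying future by a blanket impl.
impl<F, T, E> FutureResExt<T, E> for F where F: Future<Item = Result<T, E>> {}

fn main() {}
```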
To provide more context on my reservation for adding […]

By removing the associated […]

Now, one way to escape situations where generics get out of hand is to define convenience traits w/ a bunch of associated types that are more specific + a blanket impl for the more generic trait that got out of hand. For example, in advanced cases, working w/ tower's […]

I would anticipate that, if […]

Ideally as @seanmonstar mentioned, something like trait aliases would be used instead of […]

As a simple example, let's say that one is working with futures that are always […]:

```rust
trait MyFutRes<T>: Future<Item = Result<T, MyError>> {
    fn map<U>(self, f: impl FnOnce(T) -> U) -> impl MyFutRes<U>
    { .. }
}
```

The problem is that this convenience trait would not be usable (unless I'm missing something) if […]

So, I ask that […]
Could you elaborate more on this? I haven't personally seen anything exercising the 0.2 changes at a significant level. I would be eager to have some samples to dig into showing how the API changes shake out in practice.
HadrienG2
reviewed
Apr 7, 2018
```rust
where E: BoxExecutor;
/// Get the `Waker` associated with the current task.
pub fn waker(&self) -> &Waker;
```
HadrienG2
Apr 7, 2018
•
Wakers are about waking up futures that fell asleep, normally from a different stack frame than the one in which the future was polled. So a &Waker reference that is tied to the stack frame on which Future::poll() is being called does not sound very useful, and in every case that I can think of, people will want to call context.waker().clone().
If that is the general use case, shouldn't waker() just return an owned Waker directly?
I think everyone who wrote futures-using code immediately saw major ergonomic issues. Improving this reasonably is a really good idea. This RFC seems like a good idea in principle, given that:

a) the async programming pattern is important for a systems programming language […]

Doubling down on this "add X to libcore" path is not a good idea though. E.g. I don't think that the failure crate should be added, or bindgen or something. Those are nice crates and do really well as crates, so no need to add them to the language.

Also, I think that @carllerche's points on the schedule should be taken seriously. We shouldn't stabilize something that hasn't been used widely and couldn't mature. Futures shouldn't become the macros of the 2018 edition, stabilized in a rush to get something out of the window but with flaws left and right.
aturon
added some commits
Apr 20, 2018
RFC updates! As per discussion above, I've made several updates:

[…]

I believe that the proposal now reflects the current rough consensus on thread, modulo two things: some bikeshedding, and some remaining discomfort around […]

@HadrienG2, I still intend on providing fuller rationale in the places you asked for it, but haven't prioritized this since most of them are about existing aspects of the 0.2 design.

Finally, I want to note that the […]
@seanmonstar Here's a belated reply to your concerns around […]

The way I think about it is more analogously to the distinction between […]

That last statement is incorrect. This has been said elsewhere on thread, but to reiterate: the […]

However, there is another case to consider, which is when you're writing a future that doesn't use internal borrows itself, but does contain arbitrary user futures. I talk about that case in detail in the updated RFC (and we've also talked about it on IRC). TLDR: there's a trivial safe option that gets you back to the […]

It might be worth re-reading @withoutboats's blog series about pinning if any of this is unclear.

Again to clarify: it is not the case that all futures must be pinned.

See @Nemo157's comment proposing the canonical solutions.

This basically boils down to the same story as above: if all the futures are yours, they can all implement […]

The use of generators and the issues around pinning are separate. In particular, for both generators and […]

For […]. It's an open question what this will mean in terms of traits, but the situation with iterators is very substantially different from that of async fn, both in terms of what defaults are appropriate, and of what kinds of compositions are desired. I don't think the two design spaces have much insight to offer each other, sadly.

One thing I wanted to mention: it seems quite plausible to offer an optimized form of "pin safety", where if […]

I hope the above helps allay your concerns. But I also think that the best thing we can do here is write code! I've already made significant progress on the futures 0.3 branch and have a PR up for the combinators. For the latter, I made no attempt to build safe abstractions around pinning, but it's already very clear that there are just a couple of basic patterns we can likely encapsulate. My hope is to publish a futures 0.3-alpha in the very near future -- most likely by the end of next week. As outlined in the RFC, that version would only work on nightly Rust, but should otherwise give an accurate impression of what these APIs are like to work with. Ideally, with cooperation from the Tokio and Hyper maintainers, we can land 0.3-alpha integration behind a flag (or through some other means) and begin to get real-world experiences with these questions ASAP.

I think we should, at the same time, move aggressively to land both this RFC and its companion; as I explained in the previous comment, the motivation is to get as much experimental code in place as we can, so that we can support stabilization discussions down the road with more data.
seanmonstar
reviewed
Apr 21, 2018
> literally the `async` version of the usual `read`:
```rust
async fn read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
```
seanmonstar
Apr 21, 2018
Contributor
Perhaps a more specific example can be included, one that cannot be implemented today? An async function returning a future that borrows both self and buf already is possible, for example (beta just used for impl Future).
cramertj
Apr 21, 2018
Member
This specific function can't actually be used today (at least not without lots of unsafe code), since the future return type is bound to the lifetimes of the input types.
seanmonstar
Apr 21, 2018
Contributor
Hm, can you expand on that? The link shows the function actually is used, being put in the block_on.
Do you mean you can't really return this future from another function, since the TCP stream and buffer would be dropped? That's true, but the same problem exists with async fn.
First of all, let me clarify: I really appreciate the thought and work that's been put into this so far. I don't mean for my criticisms to be taken personally, and regret that I feel I need to dig into the problems I see with such detail.

**Everyone gets a Pin**

[…] This statement is also incorrect... I did end up meaning two things with my statement, so I'll clarify.

It doesn't really matter that […]

Indeed, it is. You cannot call […] If […]

I'd read the series a few times, which is what led me to my concerns originally. To be fair, I really did go back and read all 6 entries before writing this, in case I had missed something. My concerns remain the same.

**Select**

I went back and checked to see if I had missed a solution, and found that there is still not a satisfactory one. There are two options: […]

Select is not just used for timeouts. I can think of a few places in just hyper that uses […]

Select would also be used often when making […] We still could use the solution for […]

**Generators/Iterators**

I personally haven't seen a ton of evidence that this is the case. But even with that evidence, it seems like the actual point I made isn't addressed. The problem does exist for trying to implement […] The implication sounds like Pin dilutes […]
MajorBreakfast
reviewed
Apr 21, 2018
```diff
@@ -808,4 +905,4 @@ model.
 # Unresolved questions
 [unresolved]: #unresolved-questions

-None at present
+- Final name for `FutureResult`.
```
MajorBreakfast
Apr 21, 2018
•
Contributor
Names suggested so far:

- `FutureResult`
  - Pro: Word Future is in front, same word order as `Iter` and `IterMut`
- `ResultFuture`
  - Pro: The word `Future` at the end emphasizes that every `ResultFuture` is a `Future`
- `FallibleFuture`
  - Pro: The word `Future` at the end emphasizes that every `FallibleFuture` is a `Future`
  - Pro: Alliteration
  - Con: Not as descriptive: No indication that the `Result` type is involved
jimmycuadra
Apr 22, 2018
Another option: TryFuture. In the same way that TryFrom is the fallible case of From, and the Try trait is the generalization of "a thing that might fail."
FWIW, this is what @canndrew was talking about above. If we had a reference type which takes ownership of its referent, an […]

With respect to the bikeshedding: I want to refloat my suggestion from before of calling the trait […]
In what way does this differ from stack pinning? It's still just a reference, so it can't be returned out of the stack frame that created it, correct? Here's an extension of my earlier […]:

```rust
stack_pin(ImmovableCountdown { count: 3, result: "foo" }, |left| {
    let mut bar = SelectUnpin {
        left: Some(left),
        right: Some(Countdown { count: 2, result: "bar" }),
    };
    println!("{:?}", Pin::new(&mut bar).poll());
    println!("{:?}", Pin::new(&mut bar).poll());
});
```

results in […]

By pre-pinning an immovable future you can pass it in and receive it via reference (a […]). There's no standard stack pinning api available yet, but I'm fairly confident that something will be provided. (One major blocker is that a closure based api wouldn't work in […])
I have to agree with @seanmonstar re the amount of additional unsafe blocks. It should not be a requirement to add unsafe to combinators, period. Having to do so is a huge red flag to me.
rpjohnst
commented
Apr 21, 2018
You only need to add […]
There should be an "unused_must_use" warning for futures like it exists for […]. This helps in normal code that uses futures, but also with forgotten […]:

```rust
async fn get_num() -> i32 { 42 }

async fn my_fn() {
    get_num(); // Warning: "unused_must_use"
    await!(get_num()); // Correct

    get_num().is_positive(); // Error: Method does not exist
    await!(get_num()).is_positive(); // Correct
}
```

@rpjohnst pointed out in https://internals.rust-lang.org/t/explicit-future-construction-implicit-await/7344/66 that lazy futures and explicit await have this problem: If the user calls a function, but forgets the […]
xfix
reviewed
Apr 23, 2018
```rust
// A future that returns a `Result`
trait FutureResult: Future<Output = Result<Self::Item, Self::Error>> {
```
xfix
Apr 23, 2018
Contributor
How would that work without causing an `error[E0391]: cyclic dependency detected` error message?
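For comparison, one shape that sidesteps the cyclic-bound question entirely is to give the subtrait its own associated types and derive them through a blanket impl over `Result`-producing futures (sketched with a local stand-in trait; not necessarily what the RFC will settle on):

```rust
// Local stand-in so the sketch compiles on its own.
trait Future {
    type Output;
}

// The fallible-future view declares its own associated types...
trait FutureResult {
    type Item;
    type Error;
}

// ...and a blanket impl supplies them for any `Result`-producing future,
// with no self-referential supertrait bound involved.
impl<F, T, E> FutureResult for F
where
    F: Future<Output = Result<T, E>>,
{
    type Item = T;
    type Error = E;
}

fn main() {}
```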
BigBigos
commented
Apr 23, 2018
If I understand correctly, the only real need for […]

If I look at […]

Then: […]

but: […]

doesn't work. This asymmetry between references and […]

If […]

or to be more similar to […]

I understand that implementing that would require compiler changes, which we can't afford to depend on, though.
The "totally new capability" in this case is upgrading the futures crate. I.e. if any existing libraries that currently depend on […]

This is because the foundational trait requires […]

In order to avoid going from zero […]
rpjohnst
commented
Apr 23, 2018
Not if, as @aturon claims, "it's already very clear that there are just a couple of basic patterns we can likely encapsulate." Unless he's completely wrong somehow, it should be possible to a) continue using zero […]
@seanmonstar Thanks much for the detailed reply!

Just since it's worth saying: this is what the RFC process is for! Digging into these details is ultimately what will help us synthesize a solution that works well for everyone, and I appreciate the time you're putting into it. The ideas I discuss below were generated specifically by push-back from you and others.

Rather than respond point-by-point, I want to propose a different approach that meets the same end goal (of providing support for async/await with borrowing), but with a more conservative take on the library ramifications.

**Supporting async/await, take 2**

The basic idea is simple, and is along the lines of comments @seanmonstar has made before: […]

Only […]

Libraries can use either trait, and in particular can upgrade to futures 0.3 without moving to […]

The key point is that the more efficient pin-based APIs become opt in rather than opt out as they are in this RFC (by using […]).

I suspect that, in the long run, we will develop tools that make working with […]

**The story for libcore**

Take the […]

```rust
trait Async {
    type Output;
    fn poll(self: Pin<Self>, cx: &mut task::Context) -> Poll<Self::Output>;
}

trait IntoAsync {
    type Async: Async;
    fn into_async(self) -> Self::Async;
}
```

together with the […]

**The story for the futures crate**

The 0.3 version would contain a […]

Under the nightly flag, it would further: […]

As such, 0.3 would immediately work on the stable Rust compiler, and usage would be almost identical to 0.2. However, once async/await support lands in the compiler, users of nightly would be able to use that support while interoperating with the futures 0.3 ecosystem. This reduces the pressure to stabilize […]

**The long run**

Ultimately, we may find either that movable futures are common enough that we want to keep […]

**Thoughts?**

I'm curious to hear what folks think about these ideas. Personally, I'm excited at the prospect of being able to start moving the ecosystem toward 0.3 in the near future in parallel with further explorations of pinning and async/await.
This is a very similar design to what we have today, just removing the […]

This is the situation I'm in with futures 0.2 at the moment. I'm having to define my own combinators for […]

——

That would make the transition point where […]

——

Overall I like this as a next step. It's basically the smallest possible change from futures 0.2 required to bring […]

It could also be used as a transition strategy to avoid having to do a massive upgrade from 0.2 to 0.3; instead, once some experimentation has happened and the final […]

This could actually be done with 0.2 (other than the minor renamings mentioned). All the current […]
@Nemo157 You're right. I think this can be mitigated through clear documentation, though. It's a tradeoff either way, because not adding pinning to the API means there's going to be another breaking change to the futures crate in the future. Another thing that I've just thought of: @aturon's proposal postpones the introduction of […]
@MajorBreakfast @Nemo157 thanks both for your (as usual!) insightful commentary!

Yes. For now, we can do this by providing two separate ways of doing the conversion (we'll have to bikeshed which one should be the "default"). Later on, we can do this via specialization using just a single method.

Yes -- sorry I wasn't more clear about this. My plan was to provide combinators on both. For cases like […]

There are a few considerations here. Most important: this proposal makes it possible to get a futures 0.3 release that works on stable Rust* immediately, whereas with the RFC we have to stabilize […] Also, it's possible to stabilize async/await with this proposal without stabilizing […]

No; the ecosystem would be bound by […]

Yep. That said, the […]

Possibly, but I agree with @Nemo157 that that probably introduces more confusion than it's worth. FWIW, I think you can see […]

Exactly!

Yes!

That's correct. However, given that 0.2 hasn't had significant take-up yet, some of the renamings involved are pretty fundamental, and we may want to make other changes to combinators, it seems prudent to just push forward to 0.3.

It could, yes, and maybe should. I was feeling a bit wary of introducing a whole bunch of new traits, especially if in the long run we think […]

If reception continues to be positive, I will close this RFC and open a new one built around this new proposal. Speak up if you strongly object to that! (And note that the end result of both approaches is basically the same, but the new proposal has a gentler migration story.)
@aturon Your last sketch is roughly what I had hoped would happen. I am looking forward to the next RFC.
Thanks for the incredible work you all are doing!
aturon
referenced this pull request
Apr 24, 2018
Closed
RFC: add futures and task system to libcore #2418
I've opened a new RFC based on the more conservative proposal we've been discussing, and will be closing this one out. Thanks all for the great discussion so far! I'm happy with where this is heading -- as I argue in the new stabilization section, I believe this updated RFC gives us a path to shipping async/await that is very low-risk.

@HadrienG2, I've also incorporated most of your feedback in this new RFC.

See you all on the new thread!
aturon
closed this
Apr 24, 2018
Kixunil
commented
Apr 25, 2018
I like how this is not being rushed. Better make it as good as possible. :)
yasammez
commented
Apr 25, 2018
This sounds really good. I only have one question: how, in the meantime, should I as a user decide which trait to use, Future or Async, and why? If I followed the discussion correctly, it depends on whether I am !Unpin. Future is the boxed version which always works, while Async is the better-performing version which requires pinning, yes? For a transition period this seems alright, as long as this gets cleaned up before stabilization or is very well documented. Apart from this - albeit very minor - concern I am all for this proposal and look forward to your next RFC (and maybe a blog post; I am greedy). Thanks for the great effort!
aturon commented Apr 6, 2018 (edited)
This RFC proposes to add futures to libcore, in order to support the first-class `async`/`await` syntax proposed in a companion RFC. To start with, we add the smallest fragment of the futures-rs library required, but we anticipate follow-up RFCs ultimately bringing most of the library into libcore (to provide a complete complement of APIs).

The proposed APIs are based on the futures crate, but with two major changes:

- […] `Error` type (and adjusting combinators accordingly), in favor of just using `Item = Result<T, E>` instead. The RFC includes an extension trait to provide conveniences for `Result`-producing futures as well.
- […]

Rendered