Explicit `task::Context` argument to `poll` #2
Conversation
The discussion about this in #1 mostly centered on the following sentiment:
I personally do agree with this sentiment, so to me the key question is … I think it's a really tough call, and I've long basically been on the fence on this one. But it certainly does give me pause that many, many people in the Rust community who have learned futures have continued to ask for this argument to be explicit.
carllerche
commented
Feb 1, 2018
@aturon The majority of the community feedback in the original thread is the opposite: the lack of an explicit argument has not been the primary blocker to learnability. This is also feedback from people who spend time actively teaching futures. The simple fact is that addressing learnability with docs has not been tried; changing the API at this point to address learnability is premature given the state of documentation. I think this comment provides valuable insight into learnability. Also, the community feedback from those who spend a lot of time implementing futures …

My personal opinion is that adding an explicit task argument is going to make the experience of working with futures noticeably worse, to solve a problem that could be handled with a couple paragraphs of docs plus a few examples. As @djc said: …
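For readers skimming the thread, here is a rough sketch of the two shapes being debated. The trait names `ImplicitFuture`/`ExplicitFuture` and the stand-in types are illustrative only; the real definitions live in the `futures` crate and in this RFC.

```rust
// Minimal stand-ins so the sketch is self-contained; the real types live in
// the `futures` crate and in this RFC.
pub enum Async<T> { Ready(T), NotReady }
pub type Poll<T, E> = Result<Async<T>, E>;
pub mod task {
    pub struct Context; // stand-in for the RFC's `task::Context`
}

// Today (futures 0.1-style): the wakeup handle is implicit; `poll` reaches it
// through thread-local state (e.g. `futures::task::current()`).
pub trait ImplicitFuture {
    type Item;
    type Error;
    fn poll(&mut self) -> Poll<Self::Item, Self::Error>;
}

// Proposed: the wakeup handle is threaded through `poll` as an explicit argument.
pub trait ExplicitFuture {
    type Item;
    type Error;
    fn poll(&mut self, cx: &mut task::Context) -> Poll<Self::Item, Self::Error>;
}
```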
As I said in my comment, I don't think the learnability argument by itself suffices to motivate this change.
There's been a mix of sentiment, including on the very long-running thread on this issue. All I'm saying is that, as you know well, it's something that people have been asking for for quite a long time, and that gives me some pause. Also, last time we spoke about this, you were very much on the fence. I'm curious what's changed your opinion?
skade
commented
Feb 1, 2018
The summary uses the word "learnability", but then the RFC never uses it again. In the main paragraphs, it solely uses discoverability, which is clearly different. Discoverability can also be improved by mentioning the existence of the task in the documentation (possibly in the method docs). The technical arguments for this are very underdeveloped in this RFC, with a mere two sentences. I'd find it more convincing if I found a sketched …
carllerche
commented
Feb 1, 2018
@aturon Mostly that I thought I was crazy and that I was the only one who thought the thread-local strategy was a good idea. So, like I said, I wasn't going to block the decision either way and let it go to RFC. The community feedback tips me back into the "thread-local" camp, but all I'm doing is putting my thoughts on the record and letting what happens happen.
@skade Yep, I agree that this RFC will need to make the non-learnability motivations much stronger to be a solid sell.
carllerche
commented
Feb 5, 2018
Could the expected integration of futures with … For example, would crates like … How is the task context accessed from …

It seems to me that if writing poll fns manually goes away, then the ergonomic downside is significantly reduced, but I have no idea whether that would happen or not.
carllerche
reviewed
Feb 5, 2018
> tools to the task system. Deadlocks and lost wakeups are some of the trickiest things
> to track down with futures code today, and it's possible that by using a "two stage"
> system like the one proposed in this RFC, we will have more room for adding debugging
> hooks to track the precise points at which queuing for wakeup occurred, and so on.
carllerche
Feb 5, 2018
I don't see how this is true. Making the task argument explicit does not change functionality. You can hook into the task system to add debugging logic today; it is just that nobody has actually done it.
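To make that claim concrete, here is one way such a hook can be written against today's implicit-task API, without any trait change. `Traced` is a hypothetical wrapper invented for illustration, built only on the futures 0.1 `Future`/`Poll`/`Async` items.

```rust
extern crate futures; // futures = "0.1"

use futures::{Async, Future, Poll};

// A wrapper future that logs every `poll` and whether the inner future parked
// itself (returned `NotReady`), i.e. arranged for a wakeup via the implicit task.
struct Traced<F> {
    name: &'static str,
    inner: F,
}

impl<F: Future> Future for Traced<F> {
    type Item = F::Item;
    type Error = F::Error;

    fn poll(&mut self) -> Poll<F::Item, F::Error> {
        let result = self.inner.poll();
        if let Ok(Async::NotReady) = result {
            // The inner future must have registered the current (implicit)
            // task for wakeup before returning `NotReady`; log where it parked.
            println!("{}: returned NotReady, waiting for a wakeup", self.name);
        }
        result
    }
}
```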
carllerche
reviewed
Feb 5, 2018
> The implicit `Task` argument makes it difficult to tell which functions
> aside from `poll` will use the `Task` handle to schedule a wakeup. That has a
> few implications.
carllerche
Feb 5, 2018
You can tell which functions will use the Task handle because they return Async.
Today, you can look at any function in the h2 library and know if it requires a task context or not.
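A small illustration of the convention being described, with a hypothetical `Connection` type: in today's API the `Poll`/`Async` return type is the signal that a function participates in the task system, while an ordinary return type signals that it does not.

```rust
extern crate futures; // futures = "0.1"

use futures::{Async, Poll};

struct Connection {
    buffered: Option<Vec<u8>>,
}

impl Connection {
    // Returns `Poll`/`Async`: callers know this may park the current task and
    // therefore must be called from within a task context.
    fn poll_frame(&mut self) -> Poll<Vec<u8>, ()> {
        match self.buffered.take() {
            Some(frame) => Ok(Async::Ready(frame)),
            // Real code would register the current task for wakeup here
            // (e.g. via `futures::task::current()`) before returning.
            None => Ok(Async::NotReady),
        }
    }

    // Ordinary return type: no task context involved.
    fn is_closed(&self) -> bool {
        self.buffered.is_none()
    }
}
```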
aturon
Feb 5, 2018
You can't tell that easily from the call site, which is the key difference in code reasoning.
cramertj
Feb 5, 2018
There are also functions like need_read which don't return Async but do rely on an implicit Task.
seanmonstar
Feb 5, 2018
> You can't tell that easily from the call site, which is the key difference in code reasoning.

But you can, it's determined by the return value. Unless you mean when people rely on type inference for the return value and don't actually check that it's Async?
vitalyd
Feb 5, 2018
I believe I've seen code use Async as a "general purpose" return value to indicate readiness, without necessarily touching a Task internally. Perhaps this practice should be discouraged if the "Async as a return value means we touched a Task" convention is to take hold.
carllerche
reviewed
Feb 5, 2018
> First, it's easy to accidentally call a function that will attempt to access
> the task outside of a task context. Doing so will result in a panic, but it
> would be better to detect this mistake statically.
carllerche
Feb 5, 2018
This is the only real advantage as far as I can tell. If `&mut Context` is explicit, you actually can't call it.
However, today with the `Async` return value, if you accidentally try to call a function and use the return value, you will statically have to interact w/ the `Async` value, which will let you know the function requires a task context.
If you do something like `let _ = my_async_fn().unwrap()`, then you will not get compiler warnings.
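A sketch of the difference being discussed, with hypothetical function names and stand-in types: the implicit version compiles anywhere and only fails at runtime, while the explicit version cannot even be called without a context in scope.

```rust
pub mod task {
    pub struct Context; // stand-in for the RFC's `task::Context`
}
pub enum Async<T> { Ready(T), NotReady }

// Implicit style: this call compiles in any function and panics at runtime if
// no task context is active; `let _ = poll_ready_implicit();` also compiles
// without complaint.
pub fn poll_ready_implicit() -> Async<()> {
    Async::NotReady
}

// Explicit style: the caller must already hold a `&mut task::Context`, which
// only an executor (or an enclosing `poll`) can supply, so calling this from
// ordinary code is a compile error rather than a panic.
pub fn poll_ready_explicit(_cx: &mut task::Context) -> Async<()> {
    Async::NotReady
}
```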
carllerche
reviewed
Feb 5, 2018
> Second, and relatedly, it is hard to audit a piece of code for which calls
> might involve scheduling a wakeup--something that is critical to get right
> in order to avoid "lost wakeups", which are far harder to debug than an
> explicit panic.
carllerche
Feb 5, 2018
As mentioned above, you can do this by looking at the Async return value, though this does take a bit more work than searching for a variable.
carllerche
reviewed
Feb 5, 2018
> For combinator implementations, this hit is quite minor.
>
> For larger custom future implementations, it remains possible to use TLS internally to recover the existing ergonomics, if desired.
carllerche
Feb 5, 2018
IMO suggesting that people do something non-standard is a weak argument. If the non-standard strategy is better, then why is it not the standard strategy?
Sticking w/ the idioms is the best option.
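For reference, a rough sketch of what the RFC's "use TLS internally" escape hatch could look like. Everything here is hypothetical and glosses over panic safety; the point is only that a large custom future could stash the explicit context in a thread-local at the top of its own `poll` and let internal helpers fetch it implicitly.

```rust
use std::cell::Cell;
use std::ptr;

pub struct Context; // stand-in for the RFC's `task::Context`

thread_local! {
    static CURRENT_CX: Cell<*mut Context> = Cell::new(ptr::null_mut());
}

/// Make `cx` implicitly reachable (via `with_current_cx`) while `f` runs.
pub fn scoped_cx<R>(cx: &mut Context, f: impl FnOnce() -> R) -> R {
    CURRENT_CX.with(|slot| {
        let prev = slot.replace(cx as *mut Context);
        let out = f();
        slot.set(prev); // note: not restored if `f` panics (sketch only)
        out
    })
}

/// Internal helpers call this instead of threading `cx` explicitly.
pub fn with_current_cx<R>(f: impl FnOnce(&mut Context) -> R) -> R {
    CURRENT_CX.with(|slot| {
        // Take the pointer out for the duration of the call so reentrant use
        // panics instead of creating aliased mutable borrows.
        let cx = slot.replace(ptr::null_mut());
        assert!(!cx.is_null(), "called outside of scoped_cx");
        let out = f(unsafe { &mut *cx });
        slot.set(cx);
        out
    })
}
```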
carllerche
reviewed
Feb 5, 2018
> For larger custom future implementations, it remains possible to use TLS internally to recover the existing ergonomics, if desired.
>
> Finally, and perhaps most importantly, the explicit argument is never seen by users who only use
> futures via combinators or async-await syntax, so the ergonomic impact to such users is minimal.
carllerche
Feb 5, 2018
If the explicit argument is never seen by users who only use futures via combinators or async-await syntax, then why not make the ergonomics solid for those of us who do?

Really, it seems that this proposal hinges on:

- How often do users transition from combinators only -> writing `Async` functions?
- Is it worth making that initial transition smoother at the cost of making the experience of writing `Async` functions worse?
- How much smoother will that transition actually be with an explicit task argument?
skade
Feb 5, 2018
In my experience, there's a point (and it comes pretty quickly) where you want to implement a future on your own. I'd not push that towards being an edge case.

Then again, these are mostly futures that proxy the calls through to child futures. The biggest danger there, though, is forgetting to make sure there's always a future polled, which leaves the future group waiting for a poll that never comes. The task argument doesn't fix that.
carllerche
Feb 5, 2018
> The biggest danger there, though, is forgetting to make sure there's always a future polled, which leaves the future group waiting for a poll that never comes. The task argument doesn't fix that.

That is a good point. The explicit task argument doesn't actually statically prevent you from making the mistake that this RFC is attempting to protect you from.
Marwes
Feb 5, 2018
I'd question the notion that the TLS solution is more ergonomic. While it does reduce the number of characters needed to write a poll implementation, that is at best a weak argument in favor of it. On the other hand, I'd argue that keeping things explicit improves "ergonomics" just by virtue of being explicit.
seanmonstar
Feb 5, 2018
> I'd question the notion that the TLS solution is more ergonomic.

Having written a lot of custom `impl Future`, it definitely is more ergonomic.
Marwes
Feb 5, 2018
> Having written a lot of custom `impl Future`, it definitely is more ergonomic.

I assume there is something more than not needing to pass a context that makes this more ergonomic? If so, what is it?
P-E-Meunier
commented
Feb 5, 2018
As someone who has implemented a number of Futures (in crates like Thrussh and Pleingres, and other unreleased projects), I'd actually also like to question the learnability argument. I'm not saying futures are easy to learn; it did take me a while to figure out how to implement … I believe the main thing I had to learn was that …, and I fail to see how an extra argument to all functions would make it more explicit. I must admit I still don't really know how futures get registered for wakeup, but my current mental model seems to work well enough, and I don't believe the extra …
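For other readers with the same question, a minimal sketch of how wakeup registration works under today's implicit design, using the futures 0.1 `task::current()`/`Task::notify()` API; the `Registration` type and the surrounding plumbing are hypothetical.

```rust
extern crate futures; // futures = "0.1"

use futures::task::{self, Task};
use futures::{Async, Future, Poll};

struct Registration {
    ready: bool,
    waiting: Option<Task>,
}

impl Registration {
    // Called by the event source (I/O driver, timer, etc.) when the event fires.
    fn set_ready(&mut self) {
        self.ready = true;
        if let Some(task) = self.waiting.take() {
            task.notify(); // wake the task that parked itself in `poll`
        }
    }
}

struct WaitForEvent<'a> {
    reg: &'a mut Registration,
}

impl<'a> Future for WaitForEvent<'a> {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<(), ()> {
        if self.reg.ready {
            Ok(Async::Ready(()))
        } else {
            // This is the registration step: capture the current task (from
            // the implicit, thread-local task context) so it can be notified.
            self.reg.waiting = Some(task::current());
            Ok(Async::NotReady)
        }
    }
}
```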
aturon
referenced this pull request
Feb 5, 2018
Closed
Consider passing the Task that is driving the future to Future::poll and renaming the function #129
Marwes
commented
Feb 5, 2018
I feel the whole learnability argument has become somewhat of a red herring (though I do agree with it, as I did not get where wakeups were registered at first, and an explicit context would help with that). Moving on: I'd really like some input from people who want to use futures in no_std environments. If removing TLS makes it possible (or perhaps just makes it easier) to use futures in such environments, that is an indisputable advantage of the explicit solution. Perhaps we may deem those use cases not important enough, but that should be carefully weighed against the advantages of TLS.
llogiq
commented
Feb 5, 2018
I think it is good to explore the design space here; however, caution is of course always advisable. I find some precedent in keeping state off the call stack in the regex crate, which, if memory serves, uses some thread-local storage to make regexes thread-safe to use (whether or not this feature is widely used I won't argue here). In this case, the explicit version was considered too unwieldy, though it would presumably have had a tiny performance benefit. Of course, futures should be minimal-cost even with fine-grained tasks, so what is acceptable for regex may not be acceptable here. Perhaps @withoutboats' recent design trick for Generators could be applied here, too: make an explicit version for low-level stuff and build wrapper types that bridge them to the implicit higher-level trait?
matklad
commented
Feb 5, 2018
I am a casual reader of the RFC; however, I have a question which other casual readers might have as well :) The RFC shows changes in the … But what happens to the (majority of) code that just uses futures? Should I thread this …
@matklad If you're just using combinators, nothing should change.
matklad
commented
Feb 5, 2018
@cramertj Hm, I am still puzzled by …

I think it means that something changes for the users of futures? It probably would be helpful to see a code example along the lines of "here is an echo service in async hyper, before and after". Or is there literally no change unless you …
@matklad There should be literally no change. The sentence you quoted refers to functions like …
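To illustrate, a sketch of the kind of combinator-only code that stays the same under either design: no task context ever appears, because it is only threaded through hand-written `poll` implementations. `fetch_user` and `render` are made up for the example.

```rust
extern crate futures; // futures = "0.1"

use futures::{future, Future};

fn fetch_user(id: u32) -> impl Future<Item = String, Error = ()> {
    future::ok(format!("user {}", id))
}

fn render(name: String) -> String {
    format!("<h1>{}</h1>", name)
}

// Built entirely from combinators: unchanged whether `poll` takes an explicit
// `task::Context` or finds it in thread-local storage.
fn handler(id: u32) -> impl Future<Item = String, Error = ()> {
    fetch_user(id).map(render)
}
```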
skade
commented
Feb 5, 2018
Just thinking out loud without quite knowing where to put it: I am wondering how much confusion around … Poll is only of interest when implementing the trait and for executors.
I guess this depends on what you consider "user" code. Everyone I know working with futures currently has to implement futures themselves for one reason or another. I think it's good to prioritize explaining …
skade
commented
Feb 5, 2018
I should clarify that: outside of an executor context. I can see an explicit task clarifying that situation. Example:

```rust
fn main() {
    let f = // .... some future
    f.poll() // -> panic
}
```

as opposed to

```rust
fn main() {
    let f = // .... some future
    f.poll(/* now, where do I get that task from? */)
}
```

Then again, I wonder how much of an issue that is. People might or might not hit that, reach for an example and go on from there.
khuey
commented
Feb 5, 2018
I've written a fair amount of futures code over the last two years. I didn't find the current model particularly difficult to learn, though I already had a large amount of experience with event-loop and multithreaded programming from my days working on Firefox. I think making the Task argument to poll explicit is probably a good idea. I'm slightly less convinced that it's worth the disruption to the futures ecosystem at this point, though I expect that's likely to be manageable.

In the past I've found myself writing a lot of futures chains that boil down to something like "if we have the data already, return it; otherwise do a computation asynchronously and return it when done". If you attempt to poll that chain outside of a task, the "have the data already" case generally doesn't need to wait and will complete just fine. But when we fall into the asynchronous computation branch, we'll panic since we cannot wait without a task.

Someone above said that the hardest problems to debug are usually those where a future's poll impl fails to poll another future or ensure that the task is waiting for something to wake it up. I tend to agree with that. It would be nice to check that statically, but that seems difficult at best (since a future could poll 10 child futures, and as long as any one of them waits or polls something that does, it's OK).

I am curious what the intended use case for task-local storage is. I've never found myself wanting that. Typically I either have an object that I want to share across futures but that cannot be used concurrently, in which case it goes into TLS, or I have an object that I don't want to share across futures. Perhaps there are other execution models where the task is more meaningful, but in futures-cpupool or tokio the task you're running on seems largely arbitrary.
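A sketch of that pattern, with hypothetical names, against the futures 0.1-style API: the cached fast path never touches the task and is fine outside an executor, while the asynchronous branch panics there because `task::current()` finds no implicit task.

```rust
extern crate futures; // futures = "0.1"

use futures::{task, Async, Future, Poll};

struct CachedFetch {
    cached: Option<String>,
}

impl Future for CachedFetch {
    type Item = String;
    type Error = ();

    fn poll(&mut self) -> Poll<String, ()> {
        if let Some(value) = self.cached.take() {
            // Fast path: no task interaction, works even outside an executor.
            return Ok(Async::Ready(value));
        }
        // Slow path: we need a wakeup later, so we must touch the implicit
        // task; this panics if `poll` was called outside a task context.
        let _task = task::current();
        Ok(Async::NotReady)
    }
}
```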
carllerche
commented
Feb 5, 2018
I am currently using it to build a tracing system. The request context can be stored in a task-local, and libs can use that when adding tracing data. This is similar to how tracing is done in Finagle.
yazaddaruvala
commented
Feb 5, 2018
One thing I've had issues with is knowing when to use the wake API. Just exposing the context is not as helpful as using the type system to help educate me. That said, exposing the context is the only way to better utilize the type system.

@cramertj Would it be possible to not just expose the context, but use the type system to help educate users about when they need to use the task APIs (similar to how `must_use` on Result educates users about error handling)?
seanmonstar
commented
Feb 9, 2018
Yes, I think writing a slightly longer import once is better than the cost of every time I come back to the file, or show up to a new one, and wonder where something came from, and then look at the top and see a bunch of glob imports, and then just nope back out of the file.
At least in Java-land, …
Yeah, that's a key point we're missing that makes a significant difference in ergonomics. It'd also have to be pretty smart to know to implement a trait for the extension methods it adds (although …).

Edit: I should clarify that I don't like …
Kixunil
commented
Feb 10, 2018
@cramertj why not pass around the trait object itself, then? I was also thinking whether dynamic dispatch would hinder performance, but I guess it's mostly used for the slow path, so that should be fine.

Speaking about IDEs, maybe that's the answer to our ergonomics problem regarding passing the context around: create some standardized marker for IDEs to know that they should auto-fill that argument. Something like this:

```rust
trait Future {
    // type ...
    fn poll(&mut self, #[auto_pass] context) -> Poll<Self::Item, Self::Error>;
}
```

The IDE would detect all instances where the trait is used and show it like this:

```rust
fn poll(&mut self, _) -> Poll<Self::Item, Self::Error> {
    self.foo.poll(_)
}
```
matklad
commented
Feb 10, 2018
At least from my IDE experience, that's too complex a feature for IDEs to make work reliably without getting in the way :) That is, it is too complex from the "how do we expose this to the user" side, not from the "that's too hard to implement" side.
Kixunil
commented
Feb 10, 2018
@matklad the same way IDEs can collapse blocks of code, they could collapse variables. It's just horizontal, instead of vertical. :) IDEs should be able to know about the traits thanks to RLS.
carllerche
commented
Feb 10, 2018
@cramertj would it be possible to extract the task-local proposal to a separate RFC? I think there is more to discuss there, but it has been lost in the noise.
illustrious-you
commented
Feb 10, 2018
Speaking as someone new to Rust, I'd find this inconsistency startling in practice. Operations that manipulate the pointer should be distinct from operations that act through the pointer. In C++, for example, the methods that act on smart pointers use structure references (…).

That this is enforced by convention, as opposed to design, does give you the flexibility to implement a competing pattern. Without a clear and compelling reason to break from the convention, however, I'd be wary of doing so. I haven't seen such a reason. To the contrary, your responses concede @glaebhoerl's argument, indicating the decision to break with convention is arbitrary. Have I missed something?
This was referenced Feb 10, 2018
seanmonstar
commented
Feb 12, 2018
I can say that, personally, it's because I don't particularly agree with the convention. For instance, if I wrap something in an …

Now, some of the other parts of the convention, like …

In this case, the smart pointer is new, and so doesn't have to worry about breaking existing code. And I'm convinced that …
carllerche
commented
Feb 12, 2018
My biggest hesitation of …

```rust
fn foo(self: WithContext<Self>) {
    self.map(|s| &self.inner).foo_inner()
}
```

which seems like more effort than just passing an argument around.
cramertj
and others
added some commits
Feb 1, 2018
seanmonstar
commented
Feb 12, 2018
Yes, you would need to do some sort of mapping to call methods of fields. I mean, in the end, this is just a compromise, since others really wanted explicit arguments and I didn't want to die on this hill. It does have benefits when calling multiple methods of …
cramertj force-pushed the cramertj:task-context branch from 63455c6 to 45c2da6 (Feb 12, 2018)
cramertj merged commit 2f457fa into rust-lang-nursery:master (Feb 12, 2018)
Huzzah! This RFC has been merged. We'll start off by introducing …

@carllerche I've added an unresolved question about the task-local accessors and opened rust-lang-nursery/futures-rs#753 to follow up.
aturon
referenced this pull request
Feb 12, 2018
Closed
Proposal: Convention for which types must be used on a Task #250
aturon added the 0.2 label (Feb 28, 2018)
termhn
referenced this pull request
Mar 9, 2018
Closed
Explore using arbitrary_self_types & 'WithContext' pattern from futures 0.2 rfc #298
pythonesque
commented
Apr 7, 2018
Sorry to answer your question so late, but: no. TLS (in Rust as it is currently implemented) isn't generally panic safe and can't be detected as such. It's an unfortunate tradeoff, but it's one we made primarily because people don't use TLS in Rust very much... which, not to beat a dead horse, is one of the reasons I'm glad this RFC was accepted :)
cramertj
commented
Feb 1, 2018
(edited by aturon)
Learnability, reasoning, debuggability, and `no_std`-compatibility improvements achieved by making task wakeup handles into explicit arguments.

Rendered