Inferred `await!` #3
Comments
Thomasdezeeuw
commented
Jun 28, 2017

I personally prefer an explicit keyword or macro over "magic". In the first example you can skim quickly and see that there are two places where it may wait on blocking I/O, while in the second example I first have to check all the APIs of the hyper client to see whether a method returns a future. For example, I might wrongly assume that the complete body is ready once …
Perhaps! I don't know how this would be implemented, but it would be worth a shot!
jtremback
commented
Aug 7, 2017

I'm curious about what the benefit would be here.
Nercury
commented
Aug 8, 2017

In C#, sometimes you would not use await if it is necessary to juggle the returned tasks (put them in a list, await several at once, await elsewhere, etc.). I imagine the same use cases are applicable here too. Inferring await means awaiting at the first sight of any future in a method marked as #[async_await], which takes away some control. However, #[async_await] still sounds interesting, and looks like it could live alongside #[async].
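The task-juggling Nercury describes is the main case where inferred awaiting would get in the way. A sketch in the `futures` 0.1 / `futures-await` style used elsewhere in this thread (`make_request`, `Response`, and `Error` are hypothetical names; this is illustrative pseudocode, not a compilable example):

```rust
#[async]
fn fan_out() -> Result<Vec<Response>, Error> {
    // Deliberately *not* awaiting yet: collect the unresolved futures
    // so the requests can run concurrently...
    let pending: Vec<_> = (0..3).map(|_| make_request()).collect();
    // ...then await them all at once. With inferred await, each
    // make_request() call above would already have been awaited serially.
    let responses = await!(future::join_all(pending))?;
    Ok(responses)
}
```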
@Nercury your example is exactly why … @jtremback The reason I prefer inferring …
All …

```rust
let response = await!(make_request())?;
```

I saw this briefly discussed in the eRFC. I think if we have …

```rust
let response = make_request()?;
```

If you wanted to "build up" a bunch of futures and wait for them concurrently, you'd do so normally, without suffixing them with …

```rust
let requests = vec![make_request(), make_request(), make_request()];
let responses = future::join_all(requests)?; // implicit await here
```

As for implementation, I don't see how until the compiler knows about generators and the …
rushmorem
commented
Aug 30, 2017

Maybe my comment on Reddit might be useful here:

```rust
match await!(...) {
    Ok(result) => { /* do something with the result */ }
    Err(error) => { /* handle the error */ }
}
```

Both styles can coexist within the same …
jtremback
commented
Aug 31, 2017

@yazaddaruvala I'm not very experienced in Rust, and I will admit that I have not even used this library! But I was working on a piece of javascript where I had forgotten to …

So it seems to me that perhaps another syntax choice would be to …

Otherwise, I could see myself using …

It seems cleaner to make that annotation right where it's important, instead of at the top of the containing function. It also might make it easier for people new to cooperative multitasking, or to programming in general, to learn Rust. You would not have to worry about futures until you needed to drop down a level of abstraction to do something more advanced. With conventional …
Thomasdezeeuw
commented
Aug 31, 2017

@jtremback I respectfully disagree with your opinion that an automatic await would make the code easier. I think it would make it a lot harder to read; specifically, you're no longer sure what a function does or returns. If I see an explicit …

```rust
#[async]
fn some_function() -> Result<(), ()> {
    let value1 = await some_other_function()?;    // An await function.
    let value2 = do_something(value1)?;           // Regular function.
    let value3 = await a_third_function(value2)?; // An await function.
    Ok(value3)
}

#[async]
/// Example without explicit await.
fn some_function2() -> Result<(), ()> {
    let value1 = some_other_function()?;    // Maybe a regular function, maybe an await function...
    let value2 = do_something(value1)?;     // Maybe a regular function, maybe an await function...
    let value3 = a_third_function(value2)?; // Maybe a regular function, maybe an await function...
    Ok(value3)
}
```

A somewhat similar debate is going on in RFC 2111, where the idea is to automatically dereference `Copy` types. In that debate people argue that, with the current requirement of adding an explicit reference, the code is a lot easier to read, and I have to agree. Furthermore, asynchronous or concurrent code is hard. Making it seem easy, by abstracting away things that people simply need to know about when writing something that runs in parallel, will come back to bite them later.
rpjohnst
commented
Aug 31, 2017

If awaiting is going to be the default, it needs to be the default everywhere, not piggybacked on …

One potential further benefit of this swapped default would be generic code. A function could be instantiated as either blocking or async, without changing its body. This would mostly be useful for utility/combinator-style functions, which often take closures (which would instead be generators in the async case).

Confusing …
Both synchronous reads and asynchronous reads with …

```rust
#[async]
fn foo() -> Result<_, _> {
    let value1 = sync_read()?;
    let value2 = await!(async_read())?;
}
```

Let's take a moment and read that function again. There is a significantly more dangerous problem going on that is currently impossible to prevent or lint against: synchronous I/O in an asynchronous function blocks the entire event loop. In Node.js we are able to lint against this because of a convention that all sync methods end in …

Yes, this includes even your …

This should probably be its own issue though.
jtremback
commented
Aug 31, 2017

@Thomasdezeeuw I was also skeptical earlier in the thread. I was just observing that in my experience of a year or more of using async/await in JS, it is far, far more common to …

Also, I'm wondering what kinds of errors you feel the explicit …
Thomasdezeeuw
commented
Sep 1, 2017

@jtremback It's mainly being explicit about what you're doing. Calling an …

```rust
// Obviously an `AtomicUsize` would be better, but for the sake of an example...
let some_counter = Mutex::new(0usize);

fn increase_counter() -> Result<usize, ()> {
    // We're locking the mutex here.
    let counter = await some_counter.lock();
    // What happens here if `some_other_function` would block?
    // 1. Do we unlock the mutex, and try again?
    // 2. Do we keep the lock and wait until `some_other_function` is ready?
    let some_other_value = await some_other_function();
    // Without the `await` keyword/macro it's not clear that the function might
    // block, and it could be overlooked. That would cause the lock to be held
    // for a long time without it being obvious, which could lead to a
    // performance problem.
    *counter += 1;
    Ok(*counter)
}
```

Javascript hides a lot of the performance costs, but Rust isn't Javascript. I know the code might "look better", or be easier to write, without the keyword/macro call, but it won't be clear enough. I see …
rushmorem
commented
Sep 1, 2017

You are assuming that all regular functions that will be called in an … I totally understand your argument and I agree that those …

My point is, there are trade-offs with both approaches. There are even practical advantages and disadvantages with both approaches. You (and others who argue against "inferring" await!) have correctly stated that it potentially hides performance costs. As for me, I'm all for implementing @mehcode's suggestion because, IMHO, it improves the ergonomics of …

Let's not forget that the whole reason for introducing …
Thomasdezeeuw
commented
Sep 2, 2017

@rushmorem I agree that …
rushmorem
referenced this issue
Sep 2, 2017
Open
Consider writing return types from the perspective of the user instead of the implementor #15
jimmycuadra
commented
Sep 3, 2017

I don't understand the suggestion that inferring …
dwrensha
commented
Sep 3, 2017

I disagree, and not because of performance costs. In my opinion, the point of async/await is to make async code more convenient while keeping explicit the points where control can be yielded to another task. If you don't care about yield points being explicit, then why not settle for stackful coroutines? Or for plain old threads? (I'm not convinced that "performance" is a persuasive answer.)

As I see it, the main reason to use asynchronous code is so that you can have concurrent tasks that share data without needing to synchronize. If you want to make a nontrivial mutation to that shared data (e.g. say the shared data contains two tables that need to be kept consistent with each other), then you need to make sure that no other task can concurrently access the data while the mutation is being made. If you are using stackful coroutines or threads, this probably means grabbing a mutex. If you are using async/await, you know statically that as long as you don't …
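dwrensha's argument about explicit yield points can be made concrete with a sketch (hypothetical types, using the thread's `#[async]`/`await` style; illustrative pseudocode, not a compilable example):

```rust
// Two tables that must stay consistent with each other.
struct Tables {
    names: HashMap<u64, String>, // id -> name
    ids: HashMap<String, u64>,   // name -> id
}

#[async]
fn rename(shared: Rc<RefCell<Tables>>, id: u64, new_name: String) -> Result<(), ()> {
    let mut t = shared.borrow_mut();
    // No await between these mutations, so no other task can observe the
    // tables in an inconsistent state -- no mutex is needed.
    let old_name = t.names.insert(id, new_name.clone()).ok_or(())?;
    t.ids.remove(&old_name);
    t.ids.insert(new_name, id);
    Ok(())
}
```

If an implicit await could be inserted anywhere a future shows up, this invariant would be much harder to audit by reading the function.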
rushmorem
referenced this issue
Sep 5, 2017
Closed
RFC: Let's make await a method and possibly deprecate wait #20
Wow, I wasn't expecting such an opinionated debate. I've been on vacation and haven't had time to digest any of this just yet, but I've summarized the thread below. Hopefully that's helpful. I'll add some more thoughts when I've been able to formulate them.

Currently, it seems this type of inference is not possible. I'm not going to focus on this; maybe we can first see whether we would even want such a feature.

Reasons Not To: …

Reasons To: …

Other Options: …
rpjohnst
commented
Sep 15, 2017

This thread has some more discussion, based on Kotlin's flavor of async/await: https://www.reddit.com/r/rust/comments/6zy8hl/kotlins_coroutines_and_a_comparison_with_rusts/

Treating async like …
Sorry, this is kinda tangential, but a lot of people in this thread have an issue with either calling blocking APIs in an async context, or blocking a future in an async context. I'm curious if there is a way to allow creators of APIs to inform API users of such instances. Then API creators could mark functions as …

@alexcrichton given your proximity to the compiler and async development, what are your informal thoughts on such a language feature: a …

I'm not suggesting changing the std library (i.e. marking all blocking calls), that would be backwards incompatible, but at least allowing crates like … Finally, it would be ideal if e.g. …

P.S. It might even be worth making the …

P.S. I'm not tied to the word "blocking", just the idea of such a tag seems interesting.
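The marker being floated here might look like the following. To be clear, `#[blocking]` is purely hypothetical — no such attribute or lint exists in rustc — and `#[async]` is the `futures-await` macro discussed in this thread, so this is illustrative pseudocode only:

```rust
// Hypothetical: a crate author tags a function that performs
// synchronous I/O...
#[blocking]
fn read_config() -> io::Result<String> {
    std::fs::read_to_string("config.toml") // blocks the calling thread
}

#[async]
fn handler() -> io::Result<()> {
    // ...and a lint could then warn here: calling a #[blocking]
    // function from inside an #[async] function stalls the event loop.
    let cfg = read_config()?;
    Ok(())
}
```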
@yazaddaruvala I think that's effectively an effect system, which I don't think will work out well. I think …
I love that they have a name for everything! But I can see what you mean about it overwhelming many code bases, which would be a poor experience.
main--
commented
Sep 22, 2017

@yazaddaruvala @alexcrichton I was trying to design a similar lint when I was working on an async/await RFC several years ago. I ultimately abandoned the idea when I realized that there is no clear-cut distinction at all: when a function is neither async nor blocking, it's supposed to be "basically instant", taking ~0 time to execute. But chain up enough of those "0-cost" calls and what you get is very definitely blocking behavior.
There is a clear-cut distinction though. If you do IO in any form that is not deferred to an event loop, that is a blocking call, and thus your function is a blocking function. It becomes the halting problem to try and detect this with a linter. We could easily do this with an implicit and viral effect system. The issue with that is that it brings more complications than I fully understand. I think this is the way forward though.
rpjohnst
commented
Sep 22, 2017

This on its own doesn't help with blocking from within an async function, either via IO or just heavy computation. However, lints can pretty easily detect the most common blocking calls, and something like the recent Tokio reform RFC, which decouples the event loop thread from the actual future execution, can help limit the impact.

Kotlin's approach also benefits here from taking an execution context parameter at all places that initiate async execution. Because you can't just call an async function from a non-async one, and nested async calls inherit the same "executor" (to use …
jtremback
commented
Sep 22, 2017

@main-- The distinction is between code that is taking your thread a long time to complete, vs. code that is taking something else a long time to complete. Making this distinction allows your thread to do more useful work while waiting, and is the reason we're all here.
main--
commented
Sep 22, 2017

@mehcode @jtremback Agreed, there is a distinction — just not a useful one. The whole point of linting against blocking calls in async functions is that you don't want to block the entire application (or more specifically: the event loop). From this perspective, there is no semantic difference between waiting one second for a file read and calculating prime numbers for one second.

The obvious correct solution for I/O is of course async operations. I'm also aware of the reform RFC's thread pool, and of course it helps with heavy computation, but if your entire thread pool is tied up with a small number of very expensive computations, this still hangs the entire application. And that's a bug. To correctly solve this, you should not let your thread pool grow indefinitely: that merely masks application bugs with one more layer that's sure to break down under heavy load. Instead, what you need is dedicated worker threads, so you can control how much pressure the computation puts onto your system.
@main-- you're right and you're wrong. In both cases the event loop and my service are not able to handle the load. However: …

Async IO doesn't mean infinite scale. Async IO just ensures your hardware is working effectively.
llambda
commented
Oct 19, 2017

I don't think implied await is always the right thing; you don't necessarily always want to await. And now to bikeshed a bit here... here's what the @ (at symbol) might look like as a unary suffix operator for awaiting (similar in design to the ? operator):

```rust
#[async]
fn fetch_rust_lang(client: hyper::Client) -> io::Result<String> {
    let response = client.get("https://www.rust-lang.org")@?;
    if !response.status().is_success() {
        return Err(io::Error::new(io::ErrorKind::Other, "request failed"))
    }
    let body = response.body().concat()@?;
    let string = String::from_utf8(body)?;
    Ok(string)
}
```

People were resistant to the ? operator at first, but it became common style... maybe await will be similar; maybe not. The rationale is similar to that of the ? unary suffix operator. Consider the following contrived JS:

```js
(await (await fetch('/rest/api', {credentials:'include'})).json())[0]
```

vs. await as a @ unary suffix operator:

```js
fetch('/rest/api', {credentials:'include'})@.json()@[0]
```
Thomasdezeeuw
commented
Oct 19, 2017

@grant Minor point: can we not do that? It would mean people have to learn another symbol in an already rather complex syntax that is Rust today. Surely typing `await ` vs. `@` is not that big of a deal? To me `await` is far clearer than `@`, just my opinion.
parasyte
commented
Nov 22, 2017

Just my two cents, having developed async applications with gevent for about 5 years; … write async Python that looks exactly like sync Python. Under the hood it creates a default event loop and creates implicit yield points by monkey-patching the standard library in all of the obvious places; …

Speaking from my personal experience with gevent, I haven't suffered from the implied yield points. If anything, it has saved me from thinking about code paths in terms of async units: "this function needs to be called async, so I will annotate the call." There are few places where you actually care about whether a function call yields to the event loop. In some rare circumstances, I have actually done the opposite, inserting explicit yield points with …

Now, I do see the value of an explicit …

The "async code that looks exactly like sync code" thing is a bit of a double-edged sword, though. Especially for beginners. Forgetting to monkey-patch a library with gevent (for example) is a sure way to end up with blocking I/O, and you won't know about it until you test for it or run into problems with an awfully slow application.

I'm not really arguing for one way or another, just providing some insights from my own experience and trying to rationalize the feature request here.
yazaddaruvala
commented
Jun 28, 2017

What are your thoughts on a version of this syntax where `await!` is inferred?

Example: …