Stack overflow when using streams #62
Some discussion recently has led us to the conclusion that we're likely to remove `tailcall`.

```rust
extern crate futures;

use futures::{Future, BoxFuture};
use futures::stream::{iter, channel, Stream, Sender};

fn count_down(i: u32, f: Sender<u32, ()>) -> BoxFuture<(), ()> {
    iter((0..i).map(Ok)).fold(f, |f, i| {
        f.send(Ok(i)).map_err(|_| ())
    }).map(|_| ()).boxed()
}

fn main() {
    let (tx, rx) = channel::<u32, ()>();
    rx.for_each(move |_v| {
        Ok(())
    }).forget();
    count_down(100_000, tx).forget();
}
```
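The overflowing pattern can be reduced to plain Rust (a hypothetical sketch, not code from this thread): a recursive countdown grows the stack with every step, while an iterative one runs in constant stack space, which is the property the stream-based version preserves.

```rust
// Hypothetical sketch (not from this thread): the same countdown two ways.
// Deep recursion adds a stack frame per step, and Rust does not guarantee
// tail-call elimination; the loop runs in O(1) stack space, which is the
// property streams preserve.

fn count_down_recursive(i: u32, acc: &mut Vec<u32>) {
    if i == 0 {
        return;
    }
    acc.push(i);
    count_down_recursive(i - 1, acc); // one nested stack frame per item
}

fn count_down_loop(mut i: u32, acc: &mut Vec<u32>) {
    while i > 0 {
        acc.push(i); // same work, constant stack
        i -= 1;
    }
}

fn main() {
    let (mut a, mut b) = (Vec::new(), Vec::new());
    count_down_recursive(1_000, &mut a);
    count_down_loop(1_000, &mut b);
    assert_eq!(a, b);
    println!("both produce {} items", a.len());
}
```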
A loop would work fine in this case, and I think these constructs should be provided. My guess is that something like Clojure's trampoline (https://clojuredocs.org/clojure.core/trampoline) would enable any pattern that doesn't fit an existing looping combinator.
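For readers unfamiliar with the trampoline idea, here is a minimal sketch in plain Rust (illustrative only; `Bounce`, `trampoline`, and `count_down_step` are invented names, not part of futures-rs): instead of a function calling itself, each step returns either a final value or a thunk for the next step, and a driver loop bounces between them in constant stack space.

```rust
// Minimal trampoline sketch (all names invented, not part of futures-rs).
// A step returns either a result or a thunk producing the next step, and
// `trampoline` drives the steps in a loop, so the stack never grows.

enum Bounce<T> {
    Done(T),
    Next(Box<dyn FnOnce() -> Bounce<T>>),
}

fn trampoline<T>(mut step: Bounce<T>) -> T {
    loop {
        match step {
            Bounce::Done(v) => return v,
            Bounce::Next(f) => step = f(), // iteration replaces recursion
        }
    }
}

// A "tail-recursive" sum expressed as trampoline steps.
fn count_down_step(i: u64, acc: u64) -> Bounce<u64> {
    if i == 0 {
        Bounce::Done(acc)
    } else {
        Bounce::Next(Box::new(move || count_down_step(i - 1, acc + i)))
    }
}

fn main() {
    // 1_000_000 "recursive" steps without growing the stack.
    let sum = trampoline(count_down_step(1_000_000, 0));
    assert_eq!(sum, 1_000_000 * 1_000_001 / 2);
    println!("sum = {}", sum);
}
```

Each thunk does cost a heap allocation, which matches the thread's later point that avoiding unbounded stack growth without tail calls tends to involve the heap.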
Ok, I'm gonna close this in favor of #78, where we can catalog and track a number of loop combinators.
This might be worth documenting somewhere. I'm considering using this library and stumbled on this issue while looking for other things. The problem is that I can't actually tell the difference between the examples here: why does one work and not the other? I imagine this issue will eventually just disappear, but until then it might be worth a note in the tutorial explaining what's going on.
@camlorn good point! I'll add a point to the FAQ at least for now. The crucial difference here is how the "loop" of the computation is being expressed. The first example uses recursion, while the second example uses a stream. All streams work in constant stack space, whereas recursion runs the risk of blowing the stack. |
So, is it safe as long as the recursion goes through a future that isn't immediately ready, or is tail recursion like that always a problem? Both here and in the FAQ, the answer is just "this can happen." As someone looking in from the outside, without a full understanding of the issues surrounding implementing futures, I'd like to understand the conditions under which my code might explode.
Recursion through a future that isn't immediately ready runs less risk of blowing the stack, but it still risks a memory leak, unfortunately. With the removal of `tailcall`, … And yeah, it's true that this is definitely a tricky topic. I'd be willing to help out with any questions, and if you've got suggestions on the docs I'd be more than willing to beef them up!
So, this looks like an actual problem. Not sure how big of one. I can't write code yet; I'm still looking at this crate broadly, as it were.

Say I schedule a timeout with Mio, and I want this timeout to repeat over and over for the duration of the application. The way I would do this is … This sounds like it would leak.

I know this issue is closed, and I'm not sure if my points deserve a separate issue. But what I'm seeing here is a lack of zero-cost abstraction, which I kind of thought was the point. My understanding at the moment is that all tail-calling state machines made with this crate use …

I suppose this could be fixed by providing streams for more things. I'm looking at the commit that removed `tailcall`, but I really think it needs to be replaced with something. It seems to me that futures which couldn't be optimized themselves could still call the optimization pass on their parents in the tree, which would at least gain something. Beyond that, I can't make a useful suggestion, not if the constraint is no implicit heap usage. If we are allowed to use the heap, though, …

If I'm treating this as a much bigger problem than it is, say so and I'll go away. I'll be the first to admit that my understanding is poor as of yet.
It depends on how you write the iteration. If done via recursion, yes. If done via something like …
To clarify, state machines in general don't have this problem. For example, if you write a state machine by hand, you're destroying all previous state when you transition states, so there's nothing held around (unless you deliberately hold it). The problem is that if you use … In that sense, your enum/match would still only use O(1) space. If you were to express it via standard …
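The hand-written enum/match state machine described here can be sketched in plain Rust (a hypothetical example with invented names, not from this thread): each transition consumes the old state and produces the next one, so memory use stays O(1) no matter how many transitions occur.

```rust
// Hypothetical hand-rolled state machine (names invented). Transitioning
// back "into itself" just produces a new `State` value and destroys the
// old one, so space stays O(1) however long the machine runs.

enum State {
    Counting { remaining: u32, total: u64 },
    Finished { total: u64 },
}

fn step(state: State) -> State {
    match state {
        State::Counting { remaining: 0, total } => State::Finished { total },
        State::Counting { remaining, total } => State::Counting {
            remaining: remaining - 1,
            total: total + u64::from(remaining),
        },
        done @ State::Finished { .. } => done,
    }
}

fn main() {
    let mut state = State::Counting { remaining: 100_000, total: 0 };
    // Drive the machine in a loop: each transition replaces the previous
    // state, unlike a chain of recursive futures that keeps every link.
    let total = loop {
        state = match step(state) {
            State::Finished { total } => break total,
            next => next,
        };
    };
    assert_eq!(total, 100_000u64 * 100_001 / 2);
    println!("total = {}", total);
}
```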
Our intention is to provide basic loop combinators, but do you think this may not be sufficient? |
I have an actual proposal, which I'm going to put on #78. I'm not sure how you're supposed to represent a state machine that transitions back into itself recursively. What is the intended approach, something with fold? One of the things drawing me to this crate is that I could use it to avoid giant match arms; if that isn't the case, a lot of the appeal goes out of it. Of course, it's possible that my specific use case can easily be rewritten into something friendly. But it seems to me that the most obvious way to do such a thing is to tail-call like in the first comment.
I hacked looping behaviour into `futures-rs` in this form:

```rust
fn f(arg: u64) -> BoxFuture<…> {
    future::done(Ok(()))
        .and_then(do_important_thing(arg))
        .and_then(|returned| {
            if returned.loop_is_complete() {
                future::done(Ok(()))
            } else {
                f(arg)
            }
        })
        .boxed()
}
```

A non-conditional recursion like the above rapidly stack overflowed, which is unsurprising without something like tail recursion. I'm not 100% sure what the stack would look like here. I need to devise a better option using:

* rust-lang/futures-rs#62
* rust-lang/futures-rs#78
* https://users.rust-lang.org/t/how-to-create-a-loop-with-futures-rs/7011
The issue is that streams require recursion in order to loop. The current assumption in the futures library is that, at some point, a future or stream will not be ready, forcing a callback to be registered (essentially a defer). However, this assumption is not always true. The `count_down` example at the top of this issue demonstrates the problem with immediately ready futures, but the issue exists in any situation where futures become ready faster than they are consumed.
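That failure mode can be modeled in plain Rust with no futures involved (a hypothetical sketch; `produce` and `consume` are invented names): when every value is immediately ready, the consumer runs synchronously inside the producer, so each item deepens the stack instead of deferring to a registered callback.

```rust
// Hypothetical model of the failure mode (all names invented): a producer
// whose values are always "immediately ready" invokes the consumer
// synchronously, so nothing ever defers and each item adds a stack frame.

fn produce(i: u32, limit: u32, consume: &mut dyn FnMut(u32)) {
    if i == limit {
        return;
    }
    consume(i);                     // value is ready now: consumer runs inline
    produce(i + 1, limit, consume); // the next send nests on the same stack
}

fn main() {
    let mut seen = 0u32;
    // Fine at small counts; at 100_000 items this mirrors the overflow in
    // the issue's example, because no step ever yields to the event loop.
    produce(0, 1_000, &mut |_| seen += 1);
    assert_eq!(seen, 1_000);
    println!("consumed {} items", seen);
}
```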