
Async IO #1081

Closed
steveklabnik opened this Issue Apr 21, 2015 · 183 comments

@steveklabnik (Member) commented Apr 21, 2015

Rust currently includes only synchronous IO in the standard library. Should we have async IO as well, or leave that to an external crate? What would the design look like?

Moved from rust-lang/rust#6842

Related: #388

@Valloric commented Apr 21, 2015

I don't see why this couldn't just remain in third-party external crates.

@steveklabnik (Member) commented Apr 21, 2015

It can. Hence both the 'community library' and 'libs' tags.

@Anachron commented Apr 22, 2015

It depends on who actually maintains the crate.

In my opinion it should at least be some of the core members, because once async and sync are no longer on the same page, it can lead to confusion or, even worse, broken projects.

What I mean is this: once the community writes async versions of the std APIs, the community will make its own decisions about whether something should be one way or another.

It will be hard to keep up to date, and collaborators may not have dug into the core of Rust, so it will either drift in a different direction or need someone to keep it in sync with the synchronous version in the core.

@josephglanville commented May 7, 2015

The problem with "leaving it up to third-party crates" is not having a blessed implementation that all libraries can interoperate with.

This problem has already happened a few times now: Ruby and Python both have many competing and incompatible asynchronous IO libraries (gevent/twisted, celluloid/eventmachine).

In and of itself that doesn't sound so bad, until you realise the huge amount of stuff that gets built on top of said libraries. When you aren't able to use powerful libraries with each other because they belong to different "async" camps, things get sad pretty quickly.

Contrast this with C#, a language with built-in async primitives, which also ships most of the higher-level async-integrated code (HTTP client etc.). There is a single blessed solution and every library builds on top of it; they all interoperate and everyone is happy.

I think it's super important to have async IO in core, or blessed in some way, to avoid fragmentation.

@m13253 commented May 9, 2015

> I don't see why this couldn't just remain in third-party external crates.

But I think there should be language-level support for some key features that are important to async programming. That includes:

  • Coroutines (we need compiler support to ensure thread safety)
  • A Python-style yield statement (though it originally makes an iterator, in async programming it is used to save the current execution point and return to it later)
  • Or a C#-style async/await statement instead
  • Green threads (possible for third-party libraries, but providing an "official" green-threading library avoids different network libraries conflicting with each other because they used different green-threading libraries)

Yes, we can already build a Node.js-style callback-based async library -- but that makes no sense. We need language-level support to build a modern one.
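The coroutine bullet above is less magic than it sounds: "compiler support" essentially means generating a resumable state machine from straight-line code. A hand-written sketch of such a machine, with entirely made-up names (LoadStuff, Step) standing in for what a compiler would generate:

```rust
// Each enum variant is one suspension point of the coroutine.
enum LoadStuff {
    Start,
    Waiting { ticks: u32 }, // pretend we are waiting on I/O readiness
    Done,
}

// What one resumption reports back to whoever drives the machine.
enum Step {
    Pending,
    Ready(u32),
}

impl LoadStuff {
    // The hand-written analogue of a generated resume/poll function:
    // advance the state machine by one step.
    fn resume(&mut self) -> Step {
        match std::mem::replace(self, LoadStuff::Done) {
            LoadStuff::Start => {
                *self = LoadStuff::Waiting { ticks: 0 };
                Step::Pending
            }
            LoadStuff::Waiting { ticks } => {
                if ticks + 1 < 3 {
                    // Still "waiting for the kernel": park and try again.
                    *self = LoadStuff::Waiting { ticks: ticks + 1 };
                    Step::Pending
                } else {
                    *self = LoadStuff::Done;
                    Step::Ready(42)
                }
            }
            LoadStuff::Done => panic!("resumed after completion"),
        }
    }
}

fn main() {
    let mut task = LoadStuff::Start;
    loop {
        match task.resume() {
            Step::Pending => println!("pending"),
            Step::Ready(v) => {
                println!("ready: {}", v);
                break;
            }
        }
    }
}
```

Writing this by hand for every function is exactly the boilerplate a yield or async/await construct would eliminate.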

@flaub commented May 14, 2015

I think it's important to distinguish traditional async I/O from use cases requiring the standard C select() API. I think the existing synchronous I/O API is incomplete without the ability to cancel blocking calls, and this doesn't seem to be robustly implementable without using select().

I'd like to see the current synchronous I/O API extended to support cancellation without necessarily exposing the underlying select. The goal here is not to provide a highly scalable or particularly efficient way of scheduling I/O requests, but merely a way to cancel blocking calls.
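For context, the closest cancellation-adjacent tool std does offer is bounding a blocking read with a timeout, so the caller can check a cancellation flag between attempts. A minimal sketch (the loopback setup and timing are illustrative only, and the reported error kind differs by platform):

```rust
use std::io::{ErrorKind, Read};
use std::net::{TcpListener, TcpStream};
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // A peer that connects but never sends anything, so our read would
    // otherwise block forever.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let _client = TcpStream::connect(addr)?;

    let (mut conn, _) = listener.accept()?;
    // Bound the blocking call; a cancel flag could be checked after each
    // timeout instead of being able to interrupt the call directly.
    conn.set_read_timeout(Some(Duration::from_millis(50)))?;

    let mut buf = [0u8; 16];
    match conn.read(&mut buf) {
        // Unix reports a timed-out read as WouldBlock, Windows as TimedOut.
        Err(e) if e.kind() == ErrorKind::WouldBlock || e.kind() == ErrorKind::TimedOut => {
            println!("read timed out; caller could now cancel");
        }
        other => println!("unexpected: {:?}", other),
    }
    Ok(())
}
```

This is polling, not true cancellation, which is precisely the gap being described.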

@gotgenes commented May 18, 2015

> • Python-styled yield statement (though it originally makes an iterator, in async programming it is used to save current execution point and get back later)
> • Or C#-styled async/await statement instead

A minor correction to the "Python-styled" comment: it's technically yield from, syntax introduced in Python 3.3. (But yes, it is still based on generators.)

Regarding async/await, it's worth noting that this syntax was proposed for Python 3 and has been provisionally accepted for Python 3.5. PEP 492 lists a fair number of languages that have adopted or proposed the async/await keywords. PEP 492 also does a fair job of describing the weaknesses of the yield from approach, as well as the advantages of async/await.

Not that bandwagons are always the best reason to choose a direction, but async/await will become a very widespread idiom, and supporting it in Rust would provide great accessibility to those of us coming from other languages.

> Yes we can already build a Node.JS-styled callback based async library -- that makes no sense. We need language-level support to build a modern one.

Hear, hear!

@ryanhiebert commented May 18, 2015

As somebody familiar with Python and its async story, but not as familiar with compiled static languages, it would be helpful to me (and perhaps others) if somebody could comment on something I'm familiar with from Python.

In Python 3.5, async and await will be based on yield and yield from coroutines, which are based on generators under the hood. This seems like a pretty elegant design to me, but I'd love to hear whether there are any problems with that kind of approach.
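For readers comparing with Python: the design Rust eventually shipped also desugars an async fn to a generator-like state machine, which some executor must poll to completion. A minimal hand-rolled executor sketch in present-day Rust (this API did not exist at the time of the thread; load_stuff and my_func are made-up stand-ins, and the no-op waker is only adequate because nothing here actually parks):

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A do-nothing Waker: enough to drive a future that never needs to be
// woken from the outside.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Repeatedly poll a future to completion on the current thread.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

async fn load_stuff() -> (u32, u32) {
    (1, 2) // stand-in for real async I/O
}

async fn my_func() -> u32 {
    let (a, b) = load_stuff().await; // a suspension point in the state machine
    a + b
}

fn main() {
    println!("{}", block_on(my_func()));
}
```

The point of the sketch is that, as in Python, await marks suspension points and the "coroutine" is just a resumable object; the scheduling policy lives entirely in whatever drives poll.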

@gotgenes commented May 18, 2015

> In Python 3.5, async and await will be based on yield and yield from coroutines based on generators under the hood. This seems like a pretty elegant design to me, but I'd love to hear if there are any problems with that kind of approach.

From @nathanaeljones's comment on rust-lang/rust#6842:

> One of the earliest comments asked about .NET's async system. It's a callback system, like most, but there is a lot of syntactic sugar, and the 3 different systems that are exposed have different levels of overhead.
>
> The most popular is C# async/await, which generates a closure and state machine to provide a continuation callback. Stack and thread context restoration is expensive, but you can opt out per-task.

Now, what is "expensive"? I'm not sure. (I'm in the same boat as @ryanhiebert. I program mostly in Python. I was made aware of Rust by @mitsuhiko's blog posts.)

@lilith commented May 18, 2015

@gotgenes The expense depends on how much thread-local storage you're using. I know that it's low enough now that new APIs are async-only. This really depends on the language runtime (and the operating system); I don't think much can be learned about the performance implications by looking at other languages.

@sleeparrow commented Jun 22, 2015

From @flaub:

> I think it's important to distinguish between traditional async I/O from use cases requiring the standard C select() API. I think that the existing synchronous I/O API is incomplete without the ability to cancel blocking calls. This doesn't seem to be robustly implementable without using select().
>
> I'd like to see the current synchronous I/O API extended to support cancellation without necessarily exposing the underlying select. The goal here is not to provide a highly scalable or particularly efficient way of scheduling I/O requests, but merely a way to cancel blocking calls.

I agree with this. Even trivial programs might suffer from hacks due to the inability to interrupt synchronous calls. (See rust-lang/rust#26446.) I think the language should support asynchronous IO, personally, but if that were really too complex to add to the API, synchronous IO should at least be made interruptible.

@phaux commented Jun 30, 2015

Would be cool to have something like GJ in the core.

@jimrandomh commented Aug 4, 2015

Please first put in a straightforward wrapper around select/pselect. I understand you also want to build something better, but my first experience with Rust was hearing that it was 1.0 and trying a project that involved pseudoterminals, very close to something I'd already done in C, which involves waiting for input from the user and from a subprocess simultaneously. It immediately got bogged down in a rabbit hole of reverse-engineering macros from Linux system headers and corner cases of FFI, ending up much, much more difficult than it had any right to be.

@retep998 (Member) commented Aug 24, 2015

@jimrandomh Keep in mind that select on Windows only works on sockets, and you can't really wait on groups of files/terminals/pipes. If someone can create a crate with async that works well on both Windows and non-Windows, then I'm sure a lot more attention will be paid to getting async in Rust.

@gotgenes commented Aug 24, 2015

> Keep in mind that select on Windows only works on sockets, and you can't really wait on groups of files/terminals/pipes.

Lack of complete support on Windows didn't stop Python from using it.

@boazsegev commented Dec 26, 2015

👍

I doubt we could standardize an async IO API for all Rust applications without making it part of the core library... and async IO seems (to me) to be super important, even if it's just a fallback to a select call (i.e. returning a Result::Err("would block") instead of blocking).

I believe that a blessed / standard async IO API is essential in order to promote Rust as a network-programming alternative to Java, C, and other network languages (even, perhaps, Python or Ruby).

Also, considering async IO would probably benefit the programming of a browser, this would help us keep Mozilla invested.

...

Then again, I'm new to Rust, so I might have missed an existing solution (and no, mio isn't an existing solution, it's a wholesale IO framework).

@chpio commented Dec 27, 2015

We could standardize the interface in the core and let lib developers do the implementations. That way everyone would use that one "blessed" interface, but we could have multiple competing implementations (it may be a good idea, I don't know :)).

There could also be multiple async-api abstraction levels, just like in JS:

async-await:

async function myFunc() {
  const data = await loadStuffAsync();
  return data.a + data.b;
}

promises:

function myFunc() {
  return loadStuffAsync().then(data => data.a + data.b);
}

callbacks (I hate them ;)):

function myFunc(cb) {
  loadStuffAsync((err, data) => {
    if (err) {
      return cb(err);
    }

    cb(null, data.a + data.b);
  });
}

streams:

tcpSocket
  .pipe(decodeRequest) // byte stream ->[decodeRequest]-> Request object stream
  .pipe(handleRequest) // Request object stream ->[handleRequest]-> Response object stream
  .pipe(encodeResponse) // Response object stream ->[encodeResponse]-> byte stream
  .pipe(tcpSocket)
  .on('error', handleErrors);

Or is there already a nice stream implementation in Rust? Capable of...

  • highWaterMark with push-back: pauses the previous handler when the queue of the current one is full
  • multitasking: each handler is executed in its own coroutine/thread
  • byte & object streams
  • error handling
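The closest thing std Rust offers to that .pipe() chain is an Iterator adapter pipeline, which is pull-based and lazy, so the "push back" bullet falls out for free: a stage only produces a value when the next stage asks for one. A rough synchronous sketch (Request, Response, and the three handlers are made-up names, not a real API, and there is no built-in multitasking or error channel here):

```rust
// Illustrative newtypes standing in for a decoded protocol.
struct Request(String);
struct Response(String);

fn decode_request(line: String) -> Request {
    Request(line)
}

fn handle_request(req: Request) -> Response {
    Response(format!("echo: {}", req.0))
}

fn encode_response(res: Response) -> Vec<u8> {
    res.0.into_bytes()
}

fn main() {
    // Stand-in for bytes arriving on a socket.
    let input = "GET /a\nGET /b\n";

    // bytes -> Request objects -> Response objects -> bytes,
    // driven lazily by the final collect().
    let out: Vec<Vec<u8>> = input
        .lines()
        .map(|l| decode_request(l.to_string()))
        .map(handle_request)
        .map(encode_response)
        .collect();

    assert_eq!(out[0], b"echo: GET /a".to_vec());
    println!("{} responses", out.len());
}
```

What iterators do not give you is the asynchronous part: a stage cannot suspend while waiting for the socket, which is exactly the gap this thread is about.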
@boazsegev commented Dec 27, 2015

Looking over the code for the mio crate, I noticed that some unsafe code was required to implement the epoll/kqueue system calls that allow for evented IO (mio isn't purely async IO, as it still uses blocking IO methods)...

It seems to me that unsafe code should be limited, as much as possible, to Rust's core and FFI implementations.

The "trust me" paradigm is different when developers are asked to trust Rust's core team vs. when they are asked to trust third parties.

I doubt that competing implementations, as suggested by @chpio, would do a better job at promoting a performant solution... although they could, possibly, be used to select the most performant solution for the underlying core library.

Ruby on Rails is a good example of how a less performant solution (although more comfortably designed) could win in a competitive environment.

@seanmonstar (Contributor) commented Dec 27, 2015

There's nothing wrong with unsafe code, and a crate that uses it shouldn't be discouraged. Unsafe is required whenever memory safety is sufficiently complicated that the compiler cannot reason about it.

In this specific case, though, unsafe is used because Rust demands all FFI code be marked unsafe. The compiler cannot reason about functions defined in other languages. You will never have code that uses epoll without unsafe (even if that unsafety were eventually tucked into a module in libstd).

@boazsegev commented Dec 27, 2015

@seanmonstar - On the main part, I agree with your assessment.

However... Rust's main selling point is safety, and I do believe that forcing third parties to write unsafe code hurts the sales pitch. Also, unsafe code written by third parties isn't perceived as trustworthy as unsafe code within the core library.

I'm aware that it's impossible to use the epoll and kqueue APIs without unsafe code, and this is part of the reason I believe that an official async IO library would help utilize low-level system calls while promoting Rust's main feature (safety).

Having said that, I'm just one voice. Both opinions have their pros and cons, and both are legitimate.

@camlorn commented Dec 30, 2015

I'm not sure this is the place, and maybe I need to open a separate issue somewhere, but since we don't seem to have it, I'd rate some sort of abstraction over at least socket select as super important. I got to this issue by looking for that and finding other issues that linked here indirectly from 2014; since I see at least one other comment here saying the same thing, I figured I'd add my two cents. Before I go on, I should admit that I'm still on the outside looking in; I really, really want to use Rust and plan to do so in the immediate future, but haven't yet. My primary language is C++ and my secondary is Python.

While an async I/O library is really a very good idea and definitely gets a +1 from me, if only because Python proves that not putting it in the language/standard library will cause epic-level fragmentation, the lack of a standard library select means that I have to essentially opt into some sort of third-party crate or write my own abstraction over the calls. I'm considering Rust for the development of a network protocol as a learning project and can spend whatever time I choose, so I have some flexibility in this regard. But the inability to easily find a platform-neutral select in the standard library is leaving a bad taste in my mouth right now.

I'd say that getting select in, at least for sockets, as soon as possible would close a rather big and critical hole, as the only alternatives that don't involve third-party libraries seem to involve either a thread for every connection or fiddling around with timeouts. In the latter case, the documentation says the error depends on the platform--I get to write yet another abstraction layer! I'll probably opt into mio for my current project, but I still consider this a shortcoming because all I really need is select.

Speaking more generally, having used both Twisted and gevent some (though admittedly not enough to be called an expert), I like the look of asyncio and think copying/borrowing from it might be a good starting point. Twisted always degenerated to inlineCallbacks, and gevent always became all the difficulties of threads but with the "advantage" that it lies about this, offering mostly false hope. Since other languages seem to be converging on asyncio, any solution that looks like it would get my admittedly far-from-expert vote. I'd go so far as to say that we should do it by implementing Python-style generators/coroutines, but that's probably a separate issue, and I'm certainly nowhere near thinking about starting any RFCs at the moment.
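The thread-per-connection alternative mentioned above can be avoided even without select, at the cost of busy-polling. A sketch of multiplexing two connections on one thread using only std's nonblocking sockets (loopback addresses and the 10ms sleep are illustrative; a real select()/epoll wrapper would block in the kernel instead of spinning):

```rust
use std::io::{ErrorKind, Read, Write};
use std::net::{TcpListener, TcpStream};
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // Two clients that each send one message.
    let mut c1 = TcpStream::connect(addr)?;
    let mut c2 = TcpStream::connect(addr)?;
    c1.write_all(b"hello from 1")?;
    c2.write_all(b"hello from 2")?;

    let (s1, _) = listener.accept()?;
    let (s2, _) = listener.accept()?;
    let mut conns = vec![s1, s2];
    for c in &conns {
        c.set_nonblocking(true)?;
    }

    // Service both connections from one thread by round-robin polling.
    let mut got = 0;
    let mut buf = [0u8; 64];
    while got < 2 {
        for c in conns.iter_mut() {
            match c.read(&mut buf) {
                Ok(n) if n > 0 => {
                    println!("{}", String::from_utf8_lossy(&buf[..n]));
                    got += 1;
                }
                Ok(_) => {}                                          // peer closed
                Err(e) if e.kind() == ErrorKind::WouldBlock => {}    // not ready yet
                Err(e) => return Err(e),
            }
        }
        std::thread::sleep(Duration::from_millis(10)); // poor man's select()
    }
    Ok(())
}
```

The sleep is exactly the unsatisfying part: a select wrapper would replace it with "wake me when any of these sockets is readable", which is the abstraction being requested.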

@tailhook

tailhook Dec 30, 2015

While an async I/O library is really a very good idea and definitely gets a +1 from me if only because Python proves that not putting it in the language/standard library will cause epic-level fragmentation

It turns out that Python proves quite the contrary. There was asyncore in Python. But it was quickly obsoleted by Twisted (and tornado/gevent... much later). And now there is asyncio, which may be obsoleted by curio, as the latter looks much nicer by design (but it's still too early to reason about its success).

At the end of the day, implementing yield-like construct looks promising. But it's too early to put any async IO library code into the stdlib. There are virtually no downsides of using external crates for the task.

@gotgenes

gotgenes Dec 30, 2015

At the end of the day, implementing yield-like construct looks promising.

As mentioned earlier, Python has already moved on from yield from to async/await as the syntax of choice for asynchronous operations, for reasons outlined in PEP-492.

I'd like to second pointing out curio as an interesting new model for async in Python.

@camlorn

camlorn Jan 1, 2016

This is interesting. I've never heard of asyncore before now, but it looks like a very complicated way to use a select call. I'm not surprised that it didn't become popular, especially given the 1999 release date (I found one source placing it at Python 1.5.2, but can't find official confirmation). I'm not very convinced that it's good evidence that I'm wrong about Python proving the fragmentation point. I'm not saying that I'm right, just that I need more convincing before dropping my viewpoint as incorrect. In my opinion, something okay with many protocols is better than 5 or 6 options, each more amazing than the last, but each supporting different protocols.
I still think my point about select stands and that it should be put into the standard library as soon as possible. It or something like it is the first step to an async framework. The disadvantage of opting into an external crate for async I/O when all you need is select is that everyone needs to learn the external crate; by contrast, select is a very simple call and can be explained in a few paragraphs. Even if a crate containing only select exists, though, I fail to see any disadvantage to adding it to std::net in some form.

@grigio

grigio Jan 2, 2016

It seems that async-await style is popular as non-blocking pattern.

@szagi3891

szagi3891 Jan 3, 2016

I think that's mixing too many conventions.
It would be enough for async IO to return a channel.

@boazsegev

boazsegev Jan 3, 2016

I think we're overthinking it... for now, how about having non-blocking sockets return with an EWOULDBLOCK and having core wrappers for kqueue, epoll and select (system dependent, of course), with a preprocessor able to inform us which system is available and which isn't... At least let Rust be independent of C know-how as much as possible.

P.S.

I would probably wrap epoll and kqueue in a single wrapper and select in another.

@sconger

sconger Jan 31, 2016

I feel it's worth pointing out that some operating systems have started moving away from the poll/select model. There has been a trend toward the system managing IO and threads together. Windows has tied its newer asynchronous IO to Windows thread pools, and Darwin has done similarly with Grand Central Dispatch. Instead of waiting for an event, you ask the OS to do something, and it triggers a callback on a worker thread when the work is done.

I don't see Rust being able to support those APIs without something like async/await. They don't fit in with the current safety mechanisms.

@retep998

retep998 Jan 31, 2016

Member

Pretty much all async on Windows is completion based in that you ask it to do something, and it does it, and then gets back to you. The only options you really have are how it gets back to you. Whether you use the wait functions on overlapped events, or have an APC fire, or a callback in a thread pool, or a completion packet to an IOCP.

@tikue

tikue Feb 7, 2016

How would the borrow checker interact with async/await? If I borrow self mutably and then await some computation whose result will be combined with self, presumably other async states won't be able to borrow self? Would you need to drop any borrows before awaiting?

@l0calh05t

l0calh05t Feb 7, 2016

If I borrow self mutably and then await some computation whose result will be combined with self, presumably other async states won't be able to borrow self?

That's what I would expect.

@akcom

akcom Mar 4, 2016

@retep998 I have to disagree. While IO completion ports are certainly a high throughput asynchronous method, event polling is well-supported by both files and sockets.

@retep998

retep998 Mar 4, 2016

Member

@akcom Really? Windows provides event polling that isn't completely awful (select doesn't count)? Can you provide some examples of this?

@akcom

akcom commented Mar 4, 2016

@retep998 WaitForMultipleObjectsEx can be used for polling but also for overlapped operations.

@retep998

retep998 Mar 4, 2016

Member

@akcom Ah, for a moment I thought you meant Windows provided readiness based async on Windows. What you're actually saying is that Windows provides ways to receive notification of completed overlapped operations other than IOCPs which is actually what I said in my other message, so I'm not entirely sure what your point is.

The only options you really have are how it gets back to you. Whether you use the wait functions on overlapped events, or have an APC fire, or a callback in a thread pool, or a completion packet to an IOCP.

Note that you cannot use the wait functions to determine when you can read/write a file without blocking, you have to first start an operation asynchronously and then use some method to be notified of when the operation is done.

@camlorn

camlorn Sep 13, 2016

@eddyb
Supposing you build this up from futures-rs combinators, how do you translate an infinite loop? The only way I see to do it is to do what I'm suggesting, but hide the fact that you're doing what I'm suggesting inside the async fn.

I suppose maybe you can do something with streams, but streams don't seem like they can represent any loop, just some predefined ones.

@lnicola
I know how it works. But polling can be converted into a callback model just by polling and then calling the callback. What I'm saying is that you hook the iterator of futures onto the first future with then, then poll the entire mega-future, essentially. As long as someone polls it, things advance.

More broadly, lots of these models can be converted into each other if you have enough language features, and Rust almost does.

@eddyb

eddyb Sep 13, 2016

Member

@camlorn You don't want to get anywhere near a callback model because it just doesn't scale.
(You've noticed already how things end up leaking into the signature, unlike the readiness model).

You also wouldn't build async fn from combinators, this is where state machine transforms come in.
But if you wanted to, you can always write your own infinite loop Future or whatever else you need.

@Ericson2314

Ericson2314 Sep 13, 2016

Contributor

It seems the completion model isn't so bad if you accept you can't do join but can do fork. I might make a pasts-rs crate for this :D.

@camlorn

camlorn Sep 13, 2016

If the goal is "looks like imperative code", then futures aren't a good model. In my ideal world, it would be like threads, but with very explicit points at which the context can switch. I don't see this as too big of a deal to implement. I'm going to lay out all the moving bits I see, and then if people still see major problems I'll go away. But I think some of this is that I'm not being clear. Anyhow:

  • An async fn is a function written async fn foo(params) -> Res, converted roughly into fn foo(params) -> impl Async<Res>.
  • An Async<T> implements a hypothetical trait StreamingIterator. There is an event loop that knows how to deal with the things Async<T> can generate, probably references to something implementing Future or whatever your preferred term is and probably using a poll-based model. To make this work, Async<T> probably provides an extra method to reget the most recent reference; this deals with the inability to have a vector of references all with different lifetimes. I anticipate that the event loop will put these in boxes.
  • Every time through the loop, we go through the most recent references from each Async<T> in turn and see who is ready. If one of them is, then we tell that Async<T> to advance and give us the next one.
  • The await keyword works on any Async<T>. `await a` is roughly `for i in a { yield i }`. Put another way, await flattens another Async into this one. The result of await is the final result of the Async we awaited. Since we're using trait objects and it's a streaming iterator, this can work out even if they're providing results of different types. Once the Async we awaited is exhausted, we extract the result, probably stored internally in the Async.
  • In order to deal with incoming connections and similar, the event loop provides a method whereby a new top-level Async<T> can be registered at any time. In order to support joining, this function provides something implementing Async<T> in turn that will simply block the current task until the newly-created one is finished. I leave making the event loop available inside tasks to the reader, as this isn't very hard.
  • Finally, to actually pass out the result, a task uses the return keyword. When a task that is being handled directly by the event loop gives a result, the event loop forgets the Async<T>. This lets one continue spawning top-level tasks forever without running out of ram.

If you have all of these pieces, something like the following pseudocode can work:

async fn server() {
    let listener = await tcp_listener("localhost:10000");
    loop {
        let sock = await listener.incoming_connection();
        spawn do_something_with_socket(sock);
    }
}

Nothing leaks into the signature of an async fn before transformation. This happens after transformation and is done by the compiler. My model should be nearly as fast as a hand-written FSM, supports joining, and allows one to write infinite loops directly. It probably also has a horrible downside. Feel free to point that out now. The most immediate one I see is that possibly T in an Async<T> needs to be Copy.

Also, if I'm rehashing earlier discussion, tell me. Perhaps this has all been considered and laid out and dismissed as a bad idea already.

@eddyb

eddyb Sep 13, 2016

Member

@camlorn You describe all of that complexity, but a lot of it just goes away with the readiness model.
Each async fn results in a future, which can be seen as an iterator of Wait | Return(x) | Error(e).

That's already in the realm of existing generator implementations (i.e. ES6), which could be implemented in Rust (without needing streaming iterators) and then used as the basis for async fn.

@amluto

amluto Sep 13, 2016

@eddyb: I'm still a bit mystified by the idea of using an iterator for this. I agree that, in the readiness model, having a function that returns Wait | Return(x) | Error(e) works, but I think that calling that function next() and letting you iterate over it with for is confusing. Also, Iterator::next() returns Option, so you'd have to change your signature, and you may end up in an awkward situation where:

for &i in some_async { /* do nothing */ }

will panic.

If it could be made to work cleanly and efficiently, I think that:

enum PollResult<A: Async>
{
    NotReady(A),
    Ready(A::Item),
    Err(A::Error),
}

impl<...> Async<...> {
    fn poll(self) -> PollResult<Self>;
}

would be nice because you get rid of the panic case entirely: you can't possibly call poll() too many times.

@eddyb

eddyb Sep 14, 2016

Member

@amluto I'm not saying it should be an iterator, just that it has a compatible API.
A generalized Generator API would fit even better:

enum Yield<T, R> {
    Value(T),
    Return(R)
}

trait Generator {
    type Value;
    type Return;
    fn advance(&mut self) -> Yield<Self::Value, Self::Return>;
}

impl<T, G: Generator<Value=T, Return=()>> Iterator for G {
    type Item = T;
    fn next(&mut self) -> Option<T> {
        match self.advance() {
            Yield::Value(x) => Some(x),
            Yield::Return(()) => None
        }
    }
}

impl<T, E, G: Generator<Value=NotReady, Return=Result<T, E>>> Future for G {
    type Item = T;
    type Error = E;
    fn poll(&mut self) -> Poll<T, E> {
        match self.advance() {
            Yield::Value(NotReady) => Ok(Async::NotReady),
            Yield::Return(Ok(x)) => Ok(Async::Ready(x)),
            Yield::Return(Err(e)) => Err(e)
        }
    }
}
@camlorn

camlorn Sep 14, 2016

So everyone knows, I now side with @eddyb. This was half a miscommunication and half futures-rs having changed their design near the beginning of this month, after I stopped looking closely at it.

I want generators too. Has anyone given thought to focusing effort there instead? It seems to me that it's much smaller in scope, but has the potential to provide the tools for people to easily prototype these ideas. Also, it would make my life so much easier next time I have to write an iterator, though that's not really the issue at hand.

I've considered doing a generator RFC after the limited HKT one is finished with, but doubt I can. I mostly see what the code transformation is, but I think that sometimes you need it to automatically become a streaming iterator, and I'm not sure how that should work.

@brettcannon

brettcannon Nov 10, 2016

In case this issue starts up again or another one is created somewhere else on the topic of async I/O, as a Python core developer I wanted to offer to answer any Python questions people may have. You can also always email the async-sig mailing list with questions relating to asynchronous programming in Python. And if you want to know how async/await works in Python I wrote http://www.snarky.ca/how-the-heck-does-async-await-work-in-python-3-5 (summary: it's an API unlike most other languages that implicitly give you an event loop).

@steveklabnik

steveklabnik Nov 10, 2016

Member

as a Python core developer I wanted to offer to answer any Python questions people may have.

Thank you so much @brettcannon !

@njsmith

njsmith Nov 10, 2016

One thing that might be worth highlighting about Python's experience so far: we actually started out using async/await as a convenience layer on top of a traditional promise/futures-oriented API, like how async/await look in C# and JS. But in the mean time, people have started exploring other approaches and it's possible we'll actually move away from that approach. I wrote a long blog post making the case for this. I'm sure not all of it carries over the rust context, but it might be of interest in any case.

@brettcannon

brettcannon Nov 10, 2016

I just want to second reading the blog post by @njsmith as it explains why Python might be shifting how we implement event loops while not having to change async/await thanks to how it's just an API to Python.

@eddyb

eddyb Nov 10, 2016

Member

@njsmith The futures-rs effort (with tokio on top) is readiness-based (i.e. poll), not using callbacks.
It seems to me that Python moved from callbacks to something more akin a state machine?
In which case, I don't believe Rust ever had anything serious built on top of callbacks.

Member

eddyb commented Nov 10, 2016

@njsmith The futures-rs effort (with tokio on top) is readiness-based (i.e. poll), not using callbacks.
It seems to me that Python moved from callbacks to something more akin a state machine?
In which case, I don't believe Rust ever had anything serious built on top of callbacks.

@njsmith

njsmith commented Nov 11, 2016

@eddyb: futures-rs certainly seems to use callbacks, in the sense that I see lots of f: F arguments in the Future trait? But I'm definitely not enough of an expert on Rust or futures-rs to say anything definitive about how the Python experience does or doesn't carry over.

@eddyb

Member

eddyb commented Nov 11, 2016

@njsmith Those are adapters, e.g. imagine map(self, f) returning a f(await self) async fn.
The fundamental API is the poll method; everything else deals with building state machines for the readiness model without language-level async fn sugar.

This is akin to how Iterator is "internal" (i.e. you call .next() to get a value instead of being called back) and we have Iterator adapters like map and filter, but we still haven't added generators.
In fact, I expect async fn to be built on top of some more general formulation of generators.

@gotgenes

gotgenes commented Nov 11, 2016

@njsmith I actually came here to provide a link to your async/await blog post to augment the discussion, as it was incredibly thoughtful, but you beat me to it!

I'll add that @mitsuhiko also recently wrote a blog post on async in Python. Maybe particularly pertinent to this thread are his thoughts on the overloading of iterators.

@brettcannon

brettcannon commented Nov 11, 2016

The post by @mitsuhiko is specifically about asyncio and targeted at library authors (to put it in proper context).

@eddyb

Member

eddyb commented Jan 7, 2017

You forgot loop { yield None; } at the end (or yield None; panic!();). Too much boilerplate IMO.

@c0b

c0b commented Oct 7, 2017

any update in 2017?

@steveklabnik

Member

steveklabnik commented Oct 7, 2017

And https://tokio.rs/ generally

@c0b

c0b commented Oct 8, 2017

can https://tokio.rs/ be used together with the above async/await? I'm not seeing an example

@chpio

chpio commented Oct 8, 2017

can https://tokio.rs/ be used together with the above async await?

Yep, Tokio is also just using futures. futures is the library at the heart of all async operations in Rust.

I'm not seeing an example

https://github.com/alexcrichton/futures-await/blob/master/examples/echo.rs

@Ixrec

Contributor

Ixrec commented Oct 8, 2017

@c0b https://internals.rust-lang.org/t/help-test-async-await-generators-coroutines/5835 might be the best place to start if you're trying to use the async/await experiment currently on nightly.

@leonerd

leonerd commented Jan 19, 2018

If it's of any interest, I've been busy implementing this idea in Perl, and observing that Python, C#, JavaScript and Dart all also do basically the same thing.

Perl: https://metacpan.org/pod/Future::AsyncAwait
Python: https://docs.python.org/3/library/asyncio-task.html
C#: https://docs.microsoft.com/en-us/dotnet/csharp/async
JavaScript: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function
Dart: https://www.dartlang.org/articles/language/await-async

If you have something of an "official language overview" page similar to those above, I'd like to add it to my collection.

@petrochenkov petrochenkov added T-libs and removed A-libs labels Jan 29, 2018

@Centril

Contributor

Centril commented Feb 24, 2018

Since discussion has moved on from here to tokio.rs, futures-rs, and other RFCs, I'll go ahead and close this issue.

@Centril Centril closed this Feb 24, 2018

@jnferner

jnferner commented Feb 26, 2018

@Centril I think it would be useful to link those issues for readers stumbling across this one. Do you happen to have them at hand?

@Centril

Contributor

Centril commented Feb 26, 2018

Not so much about adding async IO directly to the standard library, but rather enabling it as crates:

@m13253

m13253 commented Feb 26, 2018

In reply to @Centril,

Since discussion has moved on from here to tokio.rs, futures-rs, and other RFCs

This is what I (as well as some of us) have been trying to prevent from happening for years: this move to third-party crates.

Lack of language-level async IO would cause incompatibility among third-party crates, as I have already stated before. Additionally, since Rust offers FFI, lack of language-level async IO would cause problems for C code trying to utilize concurrent programming.

In fact, nearly all languages have async IO. But only those that provide language-level concurrency (e.g. Go, Python 3, C#, etc.) have much of a chance to win the "language for the cloud" battle. We all agree that Rust is a powerful language, good for far more than building a browser engine. But we don't want to see Rust losing the cloud battlefield, do we?

Really sorry if my language is offensive to you. But I disagree with your opinion that "we are relying on third-party crates". True concurrency requires implementation at the compiler level, which third-party crates cannot offer.


Update: Thank you for your response below. ❤️

@Centril

Contributor

Centril commented Feb 26, 2018

@m13253

note: I'm only describing things as they are, not as they ought to be. =) Anyone is free to file full RFC proposals for libstd / language level async IO and we will judge those on their merits.

@Ixrec

Contributor

Ixrec commented Feb 26, 2018

Note that "other RFCs" includes https://github.com/rust-lang/rfcs/blob/master/text/2033-experimental-coroutines.md, which basically is async/await syntax (albeit slightly less pretty since it's done with procedural macros for now). That seems like "language-level async IO" to me, even if it's not-so-secretly sugar over the futures library.

@leonerd

leonerd commented Feb 26, 2018

That seems like "language-level async IO" to me, even if it's not-so-secretly sugar over the futures library.

That's OK. It's surface syntax sugar over futures in Perl as well. :) Probably true of many languages

@BatmanAoD

BatmanAoD commented Apr 25, 2018

@m13253 In case you hadn't heard, here are more up-to-date proposals:
