Build a dedicated async runtime #187
Replies: 5 comments 7 replies
-
We don't actually need to start from scratch. We can start with building our own runtime abstraction (e.g.
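The comment above is cut off, but the idea of a runtime abstraction can be sketched as a trait that the rest of the codebase programs against, so the concrete executor (Tokio, monoio, a future io_uring runtime) stays swappable behind it. Everything below (`Runtime`, `NaiveRuntime`, the busy-poll loop) is a hypothetical sketch, not code from this thread:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing when woken; enough to drive a future by polling.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

// The hypothetical abstraction: the codebase depends only on this trait,
// so the concrete executor behind it can be swapped out later.
trait Runtime {
    fn block_on<F: Future>(&self, future: F) -> F::Output;
}

// A toy single-threaded implementation, just to show the trait is implementable.
struct NaiveRuntime;

impl Runtime for NaiveRuntime {
    fn block_on<F: Future>(&self, future: F) -> F::Output {
        let waker = noop_waker();
        let mut cx = Context::from_waker(&waker);
        let mut future = Box::pin(future);
        loop {
            if let Poll::Ready(v) = future.as_mut().poll(&mut cx) {
                return v;
            }
            // A real runtime would park until woken; we just busy-poll.
            std::thread::yield_now();
        }
    }
}

async fn answer() -> u64 {
    21
}

fn main() {
    let rt = NaiveRuntime;
    println!("{}", rt.block_on(answer())); // prints 21
}
```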
-
PS: it seems the existing Rust io-uring frameworks advise users to adopt the thread-per-core (TPC) model. Given the futures they return, we may only have the chance to decide whether or not to use TPC in this early phase, and it could be hard to change after that. And Engula has some characteristics that differ from other projects, which we may need to take care of:
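To make the lock-in concrete: a work-stealing scheduler must require `Send` on spawned futures because it may move a task between threads, while a thread-per-core runtime can accept `!Send` futures (e.g. ones holding an `Rc`), and code written against the looser bound cannot move back. A compilable sketch with hypothetical `spawn` signatures and labels (not Tokio's or monoio's real APIs):

```rust
use std::future::Future;
use std::rc::Rc;

// A work-stealing scheduler (like Tokio's multi-threaded one) may move a task
// between threads, so its `spawn` must require `Send`:
fn spawn_work_stealing<F>(_task: F) -> &'static str
where
    F: Future + Send + 'static,
{
    "work-stealing"
}

// A thread-per-core runtime (the model the io-uring frameworks advise) pins
// each task to one thread, so no `Send` bound is needed:
fn spawn_thread_per_core<F>(_task: F) -> &'static str
where
    F: Future + 'static,
{
    "thread-per-core"
}

fn main() {
    // `Rc` is `!Send`, so a future capturing it is `!Send` too.
    let rc = Rc::new(42u32);
    let not_send = async move { *rc };

    // Fine: the thread-per-core spawn has no `Send` bound.
    assert_eq!(spawn_thread_per_core(not_send), "thread-per-core");

    // This would NOT compile, which is why the choice is hard to reverse later:
    // spawn_work_stealing(async move { *Rc::new(42u32) });

    // A `Send` future is accepted by both models.
    assert_eq!(spawn_work_stealing(async { 1u32 }), "work-stealing");
}
```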
-
@ihciah Your monoio looks very interesting. So let me venture to ask, are you interested in helping us with the async runtime?
-
I'm trying to test the interoperation between sync and async blocks, and my conclusions are as follows:
Interoperation between sync and async
```rust
#![feature(associated_type_bounds)]
use std::fmt::Debug;
use std::future::Future;

async fn another() -> u64 {
    21
}

fn function() -> impl Future<Output: Debug> {
    // an `async fn` desugars to a state machine that implements `Future`
    another()
}

fn main() -> anyhow::Result<()> {
    let rt = tokio::runtime::Builder::new_multi_thread().build()?;
    // you can call `.await` on any value as long as it implements `Future`
    println!("{:?}", rt.block_on(async { function().await }));
    Ok(())
}
```

Actually, code in an `async fn main` effectively runs as:

```rust
tokio::runtime::Builder::new_current_thread()
    .enable_all()
    .build()
    .unwrap()
    .block_on(async {
        // code in `async fn main`
    })
```

Coexistence of async runtimes

Because of the above, it's possible to run the storage engine in one async runtime while scheduling the tasks that talk gRPC via Tonic on Tokio's runtime. However, it is very subtle to stay aware of which tasks run on which runtime, and it most likely requires bookkeeping of those runtimes.

```rust
#![feature(associated_type_bounds)]
use std::fmt::Debug;
use std::future::Future;

struct RuntimeBookKeeper {
    tokio: tokio::runtime::Runtime,
}

impl RuntimeBookKeeper {
    fn block_on_tokio<F, T>(&self, future: F) -> T
    where
        F: Future<Output = T>,
    {
        self.tokio.block_on(future)
    }

    fn block_on_async<F, T>(&self, future: F) -> T
    where
        F: Future<Output = T>,
    {
        async_std::task::block_on(future)
    }
}

async fn another() -> u64 {
    21
}

fn function() -> impl Future<Output: Debug> {
    another()
}

fn main() -> anyhow::Result<()> {
    let bk = RuntimeBookKeeper {
        tokio: tokio::runtime::Builder::new_multi_thread().build()?,
    };
    println!(
        "{:?}",
        bk.block_on_async(async { bk.block_on_tokio(async { function().await }) })
    );
    Ok(())
}
```

If a dependency library relies heavily on Tokio, i.e. talks directly to the runtime of the "current thread", it may fail when the current context doesn't provide that runtime.
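On that last point: runtimes like Tokio locate the "current" runtime through thread-local context (`tokio::runtime::Handle::try_current()` is the real API this mimics), so a Tokio-heavy dependency fails on any thread where no such context has been entered. Below is a toy model of that mechanism using only std; the names and behavior are an illustrative assumption, not Tokio's actual internals:

```rust
use std::cell::RefCell;

// Toy stand-in for a runtime's thread-local "current context".
thread_local! {
    static CURRENT: RefCell<Option<&'static str>> = RefCell::new(None);
}

// Entering a runtime sets the context; dropping the guard clears it.
struct EnterGuard;

fn enter(name: &'static str) -> EnterGuard {
    CURRENT.with(|c| *c.borrow_mut() = Some(name));
    EnterGuard
}

impl Drop for EnterGuard {
    fn drop(&mut self) {
        CURRENT.with(|c| *c.borrow_mut() = None);
    }
}

// Mimics `Handle::try_current()`: a library that needs the ambient runtime
// fails when no context has been entered on this thread.
fn try_current() -> Result<&'static str, &'static str> {
    CURRENT.with(|c| c.borrow().ok_or("no runtime context on this thread"))
}

fn main() {
    // Outside any runtime context: this is where a Tokio-heavy library fails.
    assert!(try_current().is_err());

    let _guard = enter("tokio");
    // Inside the context, the lookup succeeds.
    assert_eq!(try_current(), Ok("tokio"));
}
```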
-
I think a Rust async runtime is significant for the long-term success of Engula. However, to be honest, I am not satisfied with the async ecosystem so far (you can check some links below). I know it will get better, but there is so much uncertainty here that I don't want to bet on an existing runtime.
On the other hand, I think a purpose-built async runtime for Engula is beneficial:
IMO, building an async runtime with io-uring is the way we should go. We don't need to care too much about supporting different operating systems for now. There is a tokio-uring project, but it's still too young to be useful. Monoio looks more interesting in that regard.
However, building our own runtime also means that we have to abandon the existing ecosystem; for example, things like Hyper/Tonic will not work. But I still think this is the right way to go. It may slow us down at the beginning, but it will be a long-term win, and it's the only way for Engula to be the top one. We still have the chance to do it at this early stage. Once we start adding a lot of features and get pressure from users, we can never go back.
Ref: #57 (comment)
References: