async await support #403
Hey, thanks for the suggestion, and thanks for trying Criterion.rs. I'm still not really clear on what async/await support would look like. It would need to accept the executor from the caller, to be sure. A proof-of-concept (perhaps using …)
Hi, I'm wondering if we can have something like: …
Maybe, if it can be implemented in a way that requires the caller to pass in an appropriate executor (Criterion.rs will not implement a futures executor) and can measure the performance of the function accurately. This is not a priority for me, but pull requests would be welcome.
@bheisler Yes, maybe we can let users pass in an executor. I'm not sure how it should be implemented yet.
See also: rust-lang/rust#71186
There are generic executor traits, such as …
I'd propose the following API surface (note: `iter`'s routine has to return *some* type implementing `Future`, so the bound needs an extra type parameter rather than `-> std::future::Future<O>`, which wouldn't compile):

```rust
impl<M: Measurement> Criterion<M> {
    fn with_executor<E: futures::task::Spawn + 'static>(self, e: E) -> Self
}

impl<'a, M: Measurement> BenchmarkGroup<'a, M> {
    fn bench_async_function<ID: IntoBenchmarkId, F>(&mut self, id: ID, f: F) -> &mut Self
    where
        F: FnMut(&mut AsyncBencher<M>),
}

impl<'a, M: Measurement> AsyncBencher<'a, M> {
    pub fn iter<O, R, Fut>(&mut self, routine: R)
    where
        R: FnMut() -> Fut,
        Fut: std::future::Future<Output = O>,
    // ...
}
```
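For concreteness, a caller-side sketch of this proposal might read as follows. This is purely hypothetical: `with_executor`, `bench_async_function`, and `AsyncBencher` are the names proposed above and were never shipped in this form; `ThreadPool` is one `futures`-crate executor that implements the `Spawn` trait.

```rust
// Hypothetical sketch: assumes the API proposed above existed as written.
use futures::executor::ThreadPool;

fn bench(c: Criterion) {
    // The caller supplies the executor; Criterion.rs does not pick one.
    let mut c = c.with_executor(ThreadPool::new().unwrap());
    c.benchmark_group("async").bench_async_function("ping", |b| {
        b.iter(|| async {
            // await the code under test here
        });
    });
}
```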
Well, for one thing, you'll need to make all of that generic in the type of the executor. In order to implement the interface you have designed, you'd have to put the executor behind a trait-object pointer, but dynamic calls would add overhead to the timing.

Speaking of which, how do you intend to do the timing? Timing each individual execution of the job and summing the times will be far too subject to timing jitter. You could submit a batch of jobs to the executor and wait for them all to complete, except that now you're timing the executor's spawn code more than the user's actual code, and you have still more overhead from signaling that all of the jobs are done.

I'd love for someone to prove me wrong, but I don't think that there is any way to measure this accurately. That's why I've always said: if you want to benchmark async code, make it a thin async shell that calls a synchronous core and benchmark that.
Of course there will be some unavoidable overhead in the execution. Could it not be mitigated by using long-running iterations?
I realized late last night that I've completely misunderstood how this should work. I thought you'd have to spawn each individual iteration, but I realized that the whole measurement loop can just be spawned as one future that awaits other futures in a loop. My bad. I have a design in mind now, but I haven't had much time for working on Criterion lately, so I can't make promises about when it will be available.
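To make that idea concrete, here is a minimal, dependency-free sketch (the no-op waker, `block_on`, and `measure` below are illustrative stand-ins, not Criterion.rs internals): the entire timing loop lives inside a single future that is handed to the executor once, and each iteration's future is awaited inside that loop, so the executor's spawn path is never part of the timed region.

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
use std::time::{Duration, Instant};

// A waker that does nothing: sufficient to poll futures that are
// always ready, as in this self-contained demo.
fn raw_waker() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker { raw_waker() }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// Drive one future to completion on the current thread by busy-polling.
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// The whole measurement loop is ONE future: the executor is entered
// once, and each iteration's future is awaited inside the loop, so
// per-iteration spawn overhead never lands inside the timed region.
async fn measure<O, R, Fut>(iters: u64, mut routine: R) -> Duration
where
    R: FnMut() -> Fut,
    Fut: Future<Output = O>,
{
    let start = Instant::now();
    for _ in 0..iters {
        let _ = routine().await;
    }
    start.elapsed()
}

fn main() {
    let elapsed = block_on(measure(1_000, || async { 2 + 2 }));
    println!("1000 iterations took {:?}", elapsed);
}
```

This is essentially the shape the real implementation could take: only the future-creation and `.await` cost of the routine is inside the timed window.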
Hey :) Async support would be awesome. In the meantime, I have a small example for Tokio which works fine for me. Maybe somebody can save some time and use it as well. :)

```rust
tokio::runtime::Builder::new_multi_thread()
    .enable_all()
    .build()
    .unwrap()
    .block_on(async {
        let socket = UdpSocket::bind(&"127.0.0.1:53533").await.unwrap();
        let server = Server::new(socket, vec![0; 1024], None);
        tokio::spawn(server.run(DUMMY_MSG.clone()));
        // Do something
        c.bench_function("non_cache_resolve", |b| {
            b.iter(|| async {
                assert_eq!(
                    router.resolve(None, QUERY.clone()).await.unwrap().answers(),
                    DUMMY_MSG.answers()
                );
            })
        });
    });
```

Something like this works for me.
I don't think it works, actually: `b.iter(|| async { ... })` only builds the future and never awaits it. What I use as a workaround is:

```rust
let rt = tokio::runtime::Runtime::new().unwrap();
group.bench_function(format!("{}/async", name), |b| {
    b.iter(|| {
        rt.block_on(async {
            let _ = operation.await;
        });
    })
});
```
But `block_on` blocks the entire runtime, leaving other spawned tasks frozen. That doesn't work for me.
OK, folks. Async/await support is implemented. It does exactly the same thing as regular synchronous benchmarks, but in a future. You can plug in your own choice of executor; Tokio, Smol, async-std, and Futures are supported out of the box. I'd appreciate it if you folks could give it a try against the version in the git repository and let me know if you run into trouble, so I can fix it before it's released.

Edit: Documentation (since the Travis build that normally generates the user guide is broken): https://github.com/bheisler/criterion.rs/blob/master/book/src/user_guide/benchmarking_async.md
Works great! I was able to test it as:

```rust
use futures::stream::StreamExt;

let rt = tokio::runtime::Runtime::new().unwrap();
group.bench_function(&format!("{}/async/first_bundle", scenario.name), |b| {
    b.to_async(&rt).iter(|| async {
        let reg = fetcher.get_registry(&scenario);
        let mut bundles = reg.generate_bundles(locales.to_vec(), res_ids.to_vec());
        assert!(bundles.next().await.is_some());
    });
});
```
It works: compassd/dcompass@9b379d0
Alright, this is ready to go in the next release. |
See #363. That issue is closed, but I think it's important to re-open it, because it seems necessary to reuse an async runtime instead of always calling `async_std::task::block_on` in `b.iter`.