async await support #403

Closed
GopherJ opened this issue Apr 15, 2020 · 18 comments
Labels: Enhancement, Intermediate (Intermediate difficulty. Knowledge of Criterion.rs' internals is recommended.)

Comments

@GopherJ

GopherJ commented Apr 15, 2020

#363 is closed, but I think it's worth re-opening: it seems necessary to reuse an async runtime rather than calling async_std::task::block_on inside b.iter every time.

@bheisler
Owner

Hey, thanks for the suggestion and thanks for trying Criterion.rs.

I'm still not really clear on what async/await support would look like. It would need to accept the executor from the caller, to be sure.

A proof-of-concept (perhaps using iter_custom) would be helpful.
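For reference, a rough sketch of what an iter_custom-based proof-of-concept could look like (do_work is a hypothetical async fn here, and futures::executor::block_on stands in for whichever executor the caller prefers):

use std::time::Instant;

use criterion::Criterion;

// Hypothetical async function to benchmark.
async fn do_work() {}

fn bench(c: &mut Criterion) {
    c.bench_function("async_poc", |b| {
        // iter_custom hands over the iteration count and expects the measured
        // Duration back, so a single block_on call can drive the whole loop.
        b.iter_custom(|iters| {
            futures::executor::block_on(async {
                let start = Instant::now();
                for _ in 0..iters {
                    do_work().await;
                }
                start.elapsed()
            })
        });
    });
}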

@GopherJ
Author

GopherJ commented Apr 25, 2020

Hi, I'm wondering if we could have something like:

b.iter_async(async { ....  })

@bheisler
Owner

Maybe, if it can be implemented in a way that requires the caller to pass in an appropriate executor (Criterion.rs will not implement a futures executor) and can measure the performance of the function accurately.

This is not a priority for me, but pull requests would be welcome.

bheisler added the Enhancement and Intermediate labels Apr 25, 2020
@GopherJ
Author

GopherJ commented Apr 25, 2020

@bheisler Yes, maybe we can let users pass in an executor. I'm not sure how it should be implemented yet.

@GopherJ
Author

GopherJ commented May 10, 2020

See also: rust-lang/rust#71186

@zacps

zacps commented Nov 7, 2020

There are generic executor traits, such as futures::task::Spawn. I don't think they're especially widely implemented at the moment, but it is more or less what you'd want to take from the caller.
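For illustration, taking the executor from the caller through that trait could look something like this sketch (nothing Criterion-specific, just the generic bound):

use futures::task::{Spawn, SpawnExt};

// Sketch: accept whichever executor the caller provides, as long as it
// implements Spawn (futures' own ThreadPool does, for example).
fn run_on_executor<E: Spawn>(executor: &E) {
    executor
        .spawn(async {
            // benchmark routine would go here
        })
        .expect("executor failed to spawn the task");
}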

@zacps

zacps commented Nov 8, 2020

I'd propose the following API surface:

impl<M: Measurement> Criterion<M> {
    fn with_executor<E: futures::task::Spawn + 'static>(self, e: E) -> Self
}
impl<'a, M: Measurement> BenchmarkGroup<'a, M> {
    fn bench_async_function<ID: IntoBenchmarkId, F>(&mut self, id: ID, f: F) -> &mut Self
        where F: FnMut(&mut AsyncBencher<'_, M>)
}
impl<'a, M: Measurement> AsyncBencher<'a, M> {
    pub fn iter<O, F, R>(&mut self, routine: R)
        where R: FnMut() -> F,
              F: std::future::Future<Output = O>
    ...
}
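
Caller-side, usage of that (hypothetical, not-yet-implemented) surface might look like:

// Hypothetical usage of the proposed API; with_executor, bench_async_function
// and AsyncBencher do not exist in Criterion.rs at this point.
let executor = futures::executor::ThreadPool::new().unwrap();
let mut criterion = Criterion::default().with_executor(executor);
let mut group = criterion.benchmark_group("async");
group.bench_async_function("my_future", |b| {
    b.iter(|| async {
        // async routine under test
    });
});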

@bheisler
Owner

bheisler commented Nov 8, 2020

Well, for one thing, you'll need to make all of that generic over the type of the executor. To implement the interface you've designed, you'd have to put the executor behind a trait object pointer, but dynamic calls would add overhead to the timing.

Speaking of which, how do you intend to do the timing? Timing each individual execution of the job and summing the times will be far too subject to timing jitter. You could submit a batch of jobs to the executor and wait for them all to complete, except that now you're timing the executor's spawn code more than the user's actual code, and you have still more overhead from signaling that all of the jobs are done.

I'd love for someone to prove me wrong, but I don't think there is any way to measure this accurately. That's why I've always said: if you want to benchmark async code, make it a thin async shell that calls a synchronous core, and benchmark that core.

@zacps

zacps commented Nov 9, 2020

Of course there will be some unavoidable overhead in the execution. Couldn't it be mitigated by using long-running iterations?

@bheisler
Owner

bheisler commented Nov 9, 2020

I realized late last night that I've completely misunderstood how this should work. I thought you'd have to spawn each individual iteration, but I realized that the whole measurement loop can just be spawned as one future that awaits other futures in a loop. My bad. I have a design in mind now, but I haven't had much time for working on Criterion lately so I can't make promises about when it will be available.
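
Roughly, that design looks like the following sketch (an illustration only, not the actual Criterion.rs internals):

use std::time::Instant;

// The whole measurement loop is one future: the caller's executor drives it
// once, and the timing spans only the awaited calls to the routine.
async fn measurement_loop<R, F>(iters: u64, mut routine: R) -> std::time::Duration
where
    R: FnMut() -> F,
    F: std::future::Future<Output = ()>,
{
    let start = Instant::now();
    for _ in 0..iters {
        routine().await;
    }
    start.elapsed()
}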

@floric

floric commented Dec 19, 2020

Hey :) Async support would be awesome. In the meantime, here is a small example for Tokio that works fine for me. Maybe it saves somebody some time. :)

use std::time::Instant;

use criterion::Criterion;
use futures::executor::block_on; // or any other block_on implementation
use tokio::{runtime::Builder, task};

fn criterion_benchmark(c: &mut Criterion) {
    // tokio 0.2-style runtime builder
    let mut rt = Builder::new()
        .threaded_scheduler()
        .enable_all()
        .build()
        .expect("Creating runtime failed");

    rt.block_on(async {
        let local = task::LocalSet::new();

        c.bench_function("MyTest", |b| {
            // iter_custom hands over the iteration count and expects the measured Duration back.
            b.iter_custom(|iters| {
                let t = local.run_until(async move {
                    let start = Instant::now();
                    for _i in 0..iters {
                        my_async_function().await;
                    }
                    start.elapsed()
                });

                block_on(t)
            });
        });
    });
}

@LEXUGE

LEXUGE commented Jan 16, 2021

    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap()
        .block_on(async {
            let socket = UdpSocket::bind(&"127.0.0.1:53533").await.unwrap();
            let server = Server::new(socket, vec![0; 1024], None);
            tokio::spawn(server.run(DUMMY_MSG.clone()));

            // Do something

            c.bench_function("non_cache_resolve", |b| {
                b.iter(|| async {
                    assert_eq!(
                        router.resolve(None, QUERY.clone()).await.unwrap().answers(),
                        DUMMY_MSG.answers()
                    );
                })
            });
        });

Something like this works for me.

@zbraniecki

Actually, I don't think it works: b.iter does not wait for the async block to finish when I try it.

What I use as a workaround is:

        let rt = tokio::runtime::Runtime::new().unwrap();

        group.bench_function(format!("{}/async", name), |b| {
            b.iter(|| {
                rt.block_on(async {
                    let _ = operation.await;
                });
            })
        });

@LEXUGE

LEXUGE commented Jan 17, 2021

But block_on blocks the entire runtime, leaving other spawned tasks frozen, so this doesn't work for me.

bheisler added a commit that referenced this issue Jan 20, 2021
@bheisler
Owner

bheisler commented Jan 20, 2021

OK, folks. Async/await support is implemented. It does exactly the same thing as regular synchronous benchmarks, but in a future. You can plug in your own choice of executor; Tokio, Smol, Async-std and Futures are supported out of the box. I'd appreciate it if you folks could give it a try against the version in the git repository and let me know if you run into trouble so I can fix it before it's released.

Edit: Documentation (since the Travis build that normally generates the user guide is broken): https://github.com/bheisler/criterion.rs/blob/master/book/src/user_guide/benchmarking_async.md

@zbraniecki

Works great! I was able to test it as:

            use futures::stream::StreamExt;

            let rt = tokio::runtime::Runtime::new().unwrap();

            group.bench_function(&format!("{}/async/first_bundle", scenario.name), |b| {
                b.to_async(&rt).iter(|| async {
                    let reg = fetcher.get_registry(&scenario);

                    let mut bundles = reg.generate_bundles(locales.to_vec(), res_ids.to_vec());
                    assert!(bundles.next().await.is_some());
                });
            });

@LEXUGE

LEXUGE commented Jan 20, 2021

It works: compassd/dcompass@9b379d0

bheisler added this to the 0.3.4 milestone Jan 23, 2021
@bheisler
Owner

Alright, this is ready to go in the next release.
