Please include async-channel in benchmarks #58

Open
joshtriplett opened this issue Nov 4, 2020 · 4 comments

Please consider adding the async-channel crate to the benchmarks. It's part of the async-std and smol stacks, and I would like to see how it compares.

Restioson (Collaborator) commented Nov 4, 2020

Currently, none of the benchmarks are asynchronous, as we're not really sure how async benches should be measured. Do you have any suggestions for how to do so? The other option would be to wrap each call to async-channel in block_on, but this has overhead.

joshtriplett (Author) commented Nov 4, 2020

@Restioson I'd suggest two benchmark configurations: thread-per-CPU and single-threaded. (There are other possible configurations, but those two will be the most common.)

The actual benchmarks should cover the same things you currently benchmark, with the producers and consumers sitting in separate tasks; the only question is whether those tasks run in parallel or not.

You already have flume synchronous benchmarks; you could add "flume async thread-per-CPU", "flume async single-threaded", "async-channel thread-per-CPU", and "async-channel single-threaded".

(That'll also allow comparing flume sync and async performance.)

Restioson (Collaborator) commented Nov 5, 2020

How should the performance of the tasks be measured? We could block_on them inside a criterion b.iter, but that might just end up measuring general futures overhead.
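
For concreteness, that option would look roughly like the following untested sketch (assuming criterion and the futures executor; the single send/recv round trip and the names are just illustrative):

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn async_channel_send_recv(c: &mut Criterion) {
    let (tx, rx) = async_channel::unbounded::<u32>();
    c.bench_function("async-channel send+recv", |b| {
        b.iter(|| {
            // Every iteration pays for block_on itself, so the measurement
            // mixes channel cost with executor/future overhead.
            futures::executor::block_on(async {
                tx.send(1).await.unwrap();
                rx.recv().await.unwrap()
            })
        })
    });
}

criterion_group!(benches, async_channel_send_recv);
criterion_main!(benches);
```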

joshtriplett (Author) commented Nov 5, 2020

You could call .detach() on the tasks and use them exactly like you're currently using threads, just substituting send_async().await and recv_async().await for the send and recv calls, and then measure how long the overall operation takes to complete, just as the current benchmarks do.
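
Roughly, the thread-per-CPU case could look like this untested sketch (the message counts and names are just illustrative, and it assumes smol's global executor, configured as described below):

```rust
use std::time::{Duration, Instant};

const MESSAGES: usize = 100_000;
const PRODUCERS: usize = 4;

// Producers run as detached tasks instead of threads; the consumer is
// driven to completion on the benchmark thread and the whole thing timed.
fn flume_async_mpsc() -> Duration {
    let (tx, rx) = flume::unbounded::<usize>();
    let start = Instant::now();

    for _ in 0..PRODUCERS {
        let tx = tx.clone();
        smol::spawn(async move {
            for i in 0..MESSAGES / PRODUCERS {
                tx.send_async(i).await.unwrap();
            }
        })
        .detach();
    }
    drop(tx); // the channel disconnects once every producer task finishes

    // Drain the channel; recv_async errors out once the channel disconnects.
    smol::block_on(async move {
        while rx.recv_async().await.is_ok() {}
    });

    start.elapsed()
}
```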

For thread-per-CPU, something like smol::spawn with SMOL_THREADS=$(nproc) would work. For the single-threaded case, you could create a LocalExecutor and run tasks on that. (Don't just run a multi-threaded executor with one thread, run a specifically single-threaded executor.)
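
The single-threaded variant would then be the same tasks on a LocalExecutor (again an untested, illustrative sketch):

```rust
use std::time::{Duration, Instant};

const MESSAGES: usize = 100_000;
const PRODUCERS: usize = 4;

// Same tasks as before, but interleaved on one LocalExecutor rather than
// run in parallel on a pool of executor threads.
fn flume_async_mpsc_single_threaded() -> Duration {
    let ex = smol::LocalExecutor::new();
    let (tx, rx) = flume::unbounded::<usize>();
    let start = Instant::now();

    for _ in 0..PRODUCERS {
        let tx = tx.clone();
        ex.spawn(async move {
            for i in 0..MESSAGES / PRODUCERS {
                tx.send_async(i).await.unwrap();
            }
        })
        .detach();
    }
    drop(tx);

    // run() polls the spawned producers alongside the consumer future,
    // all on the current thread, until the consumer completes.
    smol::block_on(ex.run(async move {
        while rx.recv_async().await.is_ok() {}
    }));

    start.elapsed()
}
```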
