
Add benchmark for xtra vs tokio mpsc channels #181

Draft
thomaseizinger wants to merge 2 commits into master
Conversation

@thomaseizinger (Collaborator) commented on Aug 30, 2022

A first implementation of a benchmark. Here are some initial results:

increment/xtra/100      time:   [940.73 us 969.06 us 998.71 us]
increment/xtra/1000     time:   [9.6542 ms 9.9258 ms 10.205 ms]
increment/xtra/10000    time:   [97.921 ms 100.49 ms 103.09 ms]
increment/mpsc/100      time:   [14.864 us 14.972 us 15.076 us]
increment/mpsc/1000     time:   [144.32 us 145.35 us 146.34 us]
increment/mpsc/10000    time:   [1.3135 ms 1.3333 ms 1.3532 ms]

Fixes #116.
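
For reference, the mpsc baseline rows above measure roughly the following shape. This is a sketch rather than the exact code in the PR; the `Message` enum, the channel capacity and the hard-coded count are illustrative only:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use tokio::sync::{mpsc, oneshot};

enum Message {
    Increment,
    Get(oneshot::Sender<u64>),
}

fn mpsc_increment(c: &mut Criterion) {
    let rt = tokio::runtime::Runtime::new().unwrap();

    c.bench_function("increment/mpsc/1000", |b| {
        b.iter(|| {
            rt.block_on(async {
                let (tx, mut rx) = mpsc::channel::<Message>(100);

                // The "actor": a plain task owning the counter state.
                tokio::spawn(async move {
                    let mut count = 0u64;
                    while let Some(msg) = rx.recv().await {
                        match msg {
                            Message::Increment => count += 1,
                            Message::Get(reply) => {
                                let _ = reply.send(count);
                            }
                        }
                    }
                });

                // Send N increments, then read back the final value so every
                // message has been processed before the iteration ends.
                for _ in 0..1000 {
                    tx.send(Message::Increment).await.unwrap();
                }

                let (reply_tx, reply_rx) = oneshot::channel();
                tx.send(Message::Get(reply_tx)).await.unwrap();
                assert_eq!(reply_rx.await.unwrap(), 1000);
            })
        })
    });
}

criterion_group!(benches, mpsc_increment);
criterion_main!(benches);
```

The xtra side of the benchmark does the same thing through an `Address`, with the increment implemented as a message handler on the actor.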

@thomaseizinger (Collaborator, Author) commented:
I am guessing the major speed difference comes from the allocations: xtra performs at least two allocations as part of wrapping each message in its envelope, plus there may be allocations within the channel to store waiting senders/receivers.
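
To illustrate the shape of that (this is not xtra's actual code, just a schematic of the envelope pattern with made-up types): sending boxes the message and its reply channel into a trait object, which is a heap allocation on top of whatever the channel and the reply channel themselves allocate.

```rust
use tokio::sync::oneshot;

// Hypothetical actor and message, standing in for whatever the benchmark uses.
struct Counter {
    count: u64,
}

struct Increment;

// The envelope erases the concrete message type so one channel can carry
// every message an actor handles.
trait Envelope<A>: Send {
    fn handle(self: Box<Self>, actor: &mut A);
}

struct MessageEnvelope<M, R> {
    message: M,
    reply: oneshot::Sender<R>, // allocation: the oneshot's shared state
}

impl Envelope<Counter> for MessageEnvelope<Increment, u64> {
    fn handle(self: Box<Self>, actor: &mut Counter) {
        actor.count += 1;
        let _ = self.reply.send(actor.count);
    }
}

fn send(ch: &tokio::sync::mpsc::UnboundedSender<Box<dyn Envelope<Counter>>>) {
    let (tx, _rx) = oneshot::channel();
    // allocation: boxing the envelope into a trait object
    let envelope: Box<dyn Envelope<Counter>> = Box::new(MessageEnvelope {
        message: Increment,
        reply: tx,
    });
    let _ = ch.send(envelope);
}
```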

@Restioson (Owner) commented:
True - this is simply the cost you pay for dynamic message dispatch. I wonder how it would look if the Tokio example also dispatched message handlers dynamically.
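
One way to approximate that without pulling in xtra would be to send boxed closures over the same mpsc channel, so each message costs a heap allocation and a virtual call. A sketch (the types and counts here are made up, not the benchmark code):

```rust
use tokio::sync::mpsc;

struct Counter {
    count: u64,
}

// Each message is a boxed closure: one heap allocation and dynamic dispatch
// per send, which is closer to what an envelope-based actor pays.
type DynMessage = Box<dyn FnOnce(&mut Counter) + Send>;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<DynMessage>(100);

    tokio::spawn(async move {
        let mut counter = Counter { count: 0 };
        while let Some(msg) = rx.recv().await {
            msg(&mut counter);
        }
    });

    for _ in 0..1000 {
        tx.send(Box::new(|c: &mut Counter| c.count += 1))
            .await
            .unwrap();
    }
}
```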

@thomaseizinger (Collaborator, Author) commented:
> True - this is simply the cost you pay for dynamic message dispatch. I wonder how it would look if the Tokio example also dispatched message handlers dynamically.

Would that be a fair comparison, though? The actor as shown in the benchmark is how I would implement it: with an enum to handle the different message cases.

@Restioson (Owner) commented:
> Would that be a fair comparison, though? The actor as shown in the benchmark is how I would implement it: with an enum to handle the different message cases.

It's comparing different things - in the dynamic dispatch case, what we are really measuring is the performance of xtra's internal channel. Previously this could be extrapolated from benchmarks of flume, but that is no longer the case.

@thomaseizinger (Collaborator, Author) commented:
> Would that be a fair comparison, though? The actor as shown in the benchmark is how I would implement it: with an enum to handle the different message cases.

> It's comparing different things - in the dynamic dispatch case, what we are really measuring is the performance of xtra's internal channel. Previously this could be extrapolated from benchmarks of flume, but that is no longer the case.

As a user, though, I don't really care what xtra is using internally. I think xtra is quite appealing from an ergonomics perspective, and the only thing these benchmarks should show IMO is that you don't pay a massive price for those ergonomics. 10us per message is still pretty good and likely negligible compared to what the application will actually do.

At least in these benchmarks, the mpsc channel is ~70x faster, but I think we are operating at a latency scale where, if that difference matters to you, you are probably going to write everything yourself anyway. I've never worked on applications that were performance-critical at that scale, so I don't actually know.

@Restioson (Owner) commented on Sep 1, 2022

> As a user, though, I don't really care what xtra is using internally. I think xtra is quite appealing from an ergonomics perspective, and the only thing these benchmarks should show IMO is that you don't pay a massive price for those ergonomics. 10us per message is still pretty good and likely negligible compared to what the application will actually do.

From a user perspective, it is not a useful measurement, I agree. However, it is still useful for us (the implementors) to see how fast xtra's channel is in comparison to tokio's.

> At least in these benchmarks, the mpsc channel is ~70x faster, but I think we are operating at a latency scale where, if that difference matters to you, you are probably going to write everything yourself anyway. I've never worked on applications that were performance-critical at that scale, so I don't actually know.

Agreed. 9us for sending one message is negligible for most applications.
