Performance regression between 2.3.0 and 2.5.0 - crossbeam-channel? #21
That's quite possibly the root cause. Can't think of anything else. You could confirm by just running your test with pre-crossbeam version, or just revert that one change locally.
If I were to make an educated guess, crossbeam-channel would probably be faster when there's some contention, while there would probably be little to no difference if the channel is mostly idle. However, it has a kind of GC that needs to run from time to time, so it could add some tail latency. Nevertheless, that 2-3ms looks like a really long time.
Thanks for your feedback! I will do some benchmarks comparing the tail latencies of crossbeam vs. mpsc. I will report the results back here.
Together with @zazabe I did some benchmarks comparing the performance of the two channel implementations.
In this test we used 4 CPU cores and put a base load of 30% on all of them during the test. It seems that the mpsc tail latency for
@vorner in #23 (comment) you mention crossbeam's GC. Could you share a link to crossbeam's code where this is happening? |
I think it would be this place: https://github.com/crossbeam-rs/crossbeam/blob/master/crossbeam-epoch/src/internal.rs#L278. |
I can't find any reference from "crossbeam-channel" to "crossbeam-epoch". The Cargo.toml also doesn't list it as a dependency. Are you sure it's used? |
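One way to check this without reading the source is to ask Cargo for the resolved dependency graph. `cargo tree` (built into Cargo since 1.44) prints the transitive dependencies, so it shows whether `crossbeam-epoch` appears anywhere under `crossbeam-channel`; the crate names are real, but the exact output depends on the versions in your lockfile:

```shell
# Run inside a project that depends on slog-async:
cargo tree -p crossbeam-channel

# Or search the whole graph for the epoch crate:
cargo tree | grep crossbeam-epoch
```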
🤔 I'd have sworn it was used some years ago and haven't checked since then, but possibly it is not used any more. |
One theory we came up with is that this is happening due to the loops in the channel's |
I've observed a performance regression in an application after upgrading slog-async from 2.3.0 to 2.5.0. I suspect it's due to the switch from std::sync::mpsc to crossbeam-channel.
The application is performing ~500 operations per second, each usually taking less than 0.5ms (99th percentile). After upgrading to slog-async 2.5.0 the average duration didn't change, but there are clearly more outliers. The 99th percentile is now 2-3ms.
I've reproduced the issue on 4 cloud VMs, 2x slog-async 2.3.0 and 2x slog-async 2.5.0, otherwise identical.
I have two theories:
Notes about the application setup:
Do you have any thoughts on what could be causing this?
I'm mainly opening this ticket in case somebody else is experiencing similar issues. Not expecting any action, feel free to close the ticket again :)