Use an optimized crc32 library which is faster #527
Conversation
We may also want to check what architectures this will run on, because it uses CPU instructions that may not be available everywhere, and I don't know if it does fallbacks.
At a quick glance it uses …
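(The library isn't named in this excerpt, but the usual shape of such a swap is a package that keeps the standard `hash/crc32` API, with a pure-Go fallback for CPUs that lack the accelerated instructions, so only the import changes. A minimal sketch of the call site under that assumption — the import path in the comment is a placeholder, not the actual package:)

```go
package main

import (
	"fmt"
	"hash/crc32" // a drop-in optimized library would replace only this import,
	// e.g. crc32 "github.com/example/fastcrc32" (placeholder path)
)

func main() {
	payload := []byte("kafka message bytes")

	// Same call either way: the optimized library is assumed to expose the
	// stdlib's ChecksumIEEE signature, so call sites like this don't change.
	sum := crc32.ChecksumIEEE(payload)
	fmt.Printf("crc32: 0x%08x\n", sum)
}
```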
OK, let's run some benchmarks!
My benchmark shows no difference; possibly the Vagrant VM does not support the necessary instructions. Since Go profiling is broken on native macOS, we may have to run something on one of our servers.
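For reference, a throughput benchmark of the kind being discussed might look like the sketch below; it is illustrative only (the 8 KiB block size is an arbitrary choice, not taken from this PR). Running it on a Linux box with `go test -bench=. -cpuprofile=cpu.out` and inspecting the result with `go tool pprof` sidesteps the macOS profiling problem.

```go
package crc32bench

import (
	"hash/crc32"
	"testing"
)

// BenchmarkChecksumIEEE measures raw CRC32 throughput over an 8 KiB block.
// b.SetBytes makes `go test -bench` report MB/s, so the stdlib and the
// optimized library can be compared directly.
func BenchmarkChecksumIEEE(b *testing.B) {
	data := make([]byte, 8192)
	b.SetBytes(int64(len(data)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		crc32.ChecksumIEEE(data)
	}
}
```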
Even on a bare-metal server whose …
Welp, found a bug in our code while benchmarking. Another PR incoming.
Short version: this basically removes CRC32 from the profile entirely; it drops from 15-20% of the CPU time in my test to a statistically insignificant number of samples. Sold. Also, the upstream author has submitted this to Go, so there's a very good chance Go 1.6 will have this optimization in the stdlib.
Use an optimized crc32 library which is faster
👏
I discovered a "send on closed channel" panic in the consumer while testing #527 which I was finally able to track down. If a partition takes a long time to drain to the user, then the responseFeeder reclaims its ownership token from the broker so that the broker doesn't block its other partitions. However, if the user closes the PartitionConsumer (closing the dying channel) then the brokerConsumer will unconditionally return the ownership token to the dispatcher even if the responseFeeder is holding it. This results in two ownership tokens for the same partition (one in the feeder, one in the dispatcher), which leads to all sorts of subtle brokenness. It manifested in at least two different "send on closed channel" backtraces depending on the exact timing, and possibly more.

To fix, move the check on `child.dying` to the top of the `subscriptionConsumer` loop, where we are guaranteed to have the ownership token. Combine that check with the 'new subscriptions' check into an `updateSubscriptions` helper method. The diff is huge because this lets us drop an indentation level in `handleResponses`; I suggest reviewing with `w=1` to ignore whitespace.
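The actual diff is much larger, but the shape of the fix reads roughly like the sketch below. Everything here is a simplified stand-in, not Sarama's real types; the point is only that the dying check now runs at the top of the loop, while that goroutine is guaranteed to hold the ownership tokens, so each token is returned from exactly one place.

```go
package consumer

// Illustrative sketch only: simplified stand-ins for Sarama's real types.

type partitionConsumer struct {
	dying chan struct{} // closed when the user closes the PartitionConsumer
}

type brokerConsumer struct {
	newSubscriptions chan []*partitionConsumer // children handed over by the dispatcher
	subscriptions    map[*partitionConsumer]bool
	returnToken      func(*partitionConsumer) // hand ownership back to the dispatcher
}

// The subscriptionConsumer-style loop: the dying check happens at the top of
// each iteration, while this goroutine still holds every child's ownership
// token, so a token can only ever be returned from one place.
func (bc *brokerConsumer) run() {
	for {
		bc.updateSubscriptions()
		if len(bc.subscriptions) == 0 {
			return
		}
		bc.fetchAndFeed() // fetch a response and feed it to the subscribed children
	}
}

// updateSubscriptions combines the "new subscriptions" check with the dying
// check, mirroring the helper described above.
func (bc *brokerConsumer) updateSubscriptions() {
	select {
	case newChildren := <-bc.newSubscriptions:
		for _, child := range newChildren {
			bc.subscriptions[child] = true
		}
	default:
	}

	for child := range bc.subscriptions {
		select {
		case <-child.dying:
			delete(bc.subscriptions, child)
			// Return ownership to the dispatcher; in this sketch the loop is
			// the sole token holder at this point, so this is the only return path.
			bc.returnToken(child)
		default:
		}
	}
}

func (bc *brokerConsumer) fetchAndFeed() { /* elided in this sketch */ }
```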
Needs testing to ensure that this is actually faster (and is still correct), but may solve #255.
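For the "still correct" half, a quick differential test against the stdlib would do. This is a sketch under the assumption that the new library exposes the same `ChecksumIEEE` signature; the `optimizedChecksumIEEE` variable below is a placeholder for it.

```go
package crc32check

import (
	"hash/crc32"
	"math/rand"
	"testing"
)

// Placeholder: point this at the drop-in library's ChecksumIEEE. It defaults
// to the stdlib so the sketch compiles on its own.
var optimizedChecksumIEEE = crc32.ChecksumIEEE

// TestMatchesStdlib cross-checks the optimized implementation against the
// stdlib on random inputs of assorted sizes.
func TestMatchesStdlib(t *testing.T) {
	rng := rand.New(rand.NewSource(1))
	for _, size := range []int{0, 1, 7, 64, 1 << 10, 1 << 16} {
		data := make([]byte, size)
		rng.Read(data)

		want := crc32.ChecksumIEEE(data)
		if got := optimizedChecksumIEEE(data); got != want {
			t.Fatalf("size %d: got 0x%08x, want 0x%08x", size, got, want)
		}
	}
}
```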