Optimize IO's encoding #90
Fixes #89 and improves performance.
This implementation is inspired by the internal encoding of Monix's `Task`.
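For readers unfamiliar with this style of encoding, here is a minimal, hypothetical sketch of the general idea (names like `TinyIO` are invented for illustration; this is not the actual cats-effect or Monix source): operations are reified as data constructors and interpreted by a run-loop that manages its own stack of continuations, so deeply nested `flatMap` chains cannot overflow the call stack.

```scala
// Minimal sketch of a flatMap-based encoding (hypothetical, simplified).
sealed trait TinyIO[+A] {
  def flatMap[B](f: A => TinyIO[B]): TinyIO[B] = TinyIO.Bind(this, f)
  def map[B](f: A => B): TinyIO[B] = flatMap(a => TinyIO.Pure(f(a)))
}

object TinyIO {
  final case class Pure[+A](a: A) extends TinyIO[A]
  final case class Delay[+A](thunk: () => A) extends TinyIO[A]
  final case class Bind[A, +B](source: TinyIO[A], f: A => TinyIO[B]) extends TinyIO[B]

  def pure[A](a: A): TinyIO[A] = Pure(a)
  def delay[A](a: => A): TinyIO[A] = Delay(() => a)

  // The run-loop: a while-loop plus an explicit stack of continuations,
  // instead of recursive calls on the JVM's call stack.
  def run[A](io: TinyIO[A]): A = {
    var current: TinyIO[Any] = io
    var stack: List[Any => TinyIO[Any]] = Nil
    while (true) {
      current match {
        case Pure(a) =>
          stack match {
            case f :: rest => stack = rest; current = f(a)
            case Nil       => return a.asInstanceOf[A]
          }
        case Delay(thunk) =>
          current = Pure(thunk())
        case Bind(source, f) =>
          stack = f.asInstanceOf[Any => TinyIO[Any]] :: stack
          current = source
      }
    }
    sys.error("unreachable")
  }
}
```

Because binds are pushed onto a heap-allocated list rather than the call stack, a loop of a hundred thousand nested `flatMap`s runs in constant stack space.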
Benchmarking (last update: 2017-11-29)
The PR includes a JMH setup with benchmarks comparing performance before and after these changes.
In order to run the benchmarks one needs to execute the script:
The results will be dumped in an output file.
This measures a plain tail-recursive loop built with flatMap:
Over twice the throughput.
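To illustrate the shape of such a workload (a hypothetical stand-in, not the actual benchmark code), here is a tail-recursive bind loop written against the standard library's `TailCalls` trampoline instead of `IO`: each iteration binds once and recurses in tail position, so the interpreter's continuation stack stays constant-sized.

```scala
import scala.util.control.TailCalls._

// Hypothetical sketch of a tail-recursive flatMap loop; TailRec stands in
// for IO. The recursive call is the *result* of the continuation, so no
// continuations pile up between iterations.
def tailLoop(i: Int, until: Int): TailRec[Int] =
  done(i).flatMap { j =>
    if (j < until) tailcall(tailLoop(j + 1, until))
    else done(j)
  }

// Runs in constant stack space even for large iteration counts.
val r = tailLoop(0, 100000).result
```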
This one measures a non-tail-recursive loop:
After PR changes:
The differences are dramatic due to memory usage.
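The shape of a non-tail-recursive bind can be sketched like this (again a hypothetical stand-in using the standard library's trampoline, not the benchmark itself): the recursive call sits *under* the flatMap, so one continuation must be retained per step until the recursion bottoms out, which is where memory behavior starts to dominate.

```scala
import scala.util.control.TailCalls._

// Hypothetical sketch of a non-tail-recursive bind. With a reified encoding
// the pending continuations live on the heap (one per step); with naive
// recursion they would consume call-stack frames instead.
def sumTo(n: Int): TailRec[Long] =
  if (n == 0) done(0L)
  else tailcall(sumTo(n - 1)).flatMap(acc => done(acc + n))

val total = sumTo(100000).result
```

Allocating and traversing those hundred thousand continuation frames is what makes an efficient encoding pay off so visibly here.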
This one measures the performance of error handling:
After the PR changes:
The differences are dramatic when errors actually get handled.
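A hypothetical sketch of this kind of workload, with `Try` standing in for `IO` (the benchmarked code would use `IO`'s error-raising and error-handling combinators instead): raise an error on every iteration and recover from it.

```scala
import scala.util.{Success, Try}
import scala.util.control.NoStackTrace

// NoStackTrace keeps exception construction cheap, as benchmarks usually do.
object Dummy extends RuntimeException("dummy") with NoStackTrace

// Hypothetical error-handling loop: every step fails and is recovered.
def errorLoop(i: Int, until: Int): Try[Int] =
  Try[Int](throw Dummy).recoverWith {
    case Dummy => if (i < until) errorLoop(i + 1, until) else Success(i)
  }

val recovered = errorLoop(0, 1000)
```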
This one measures the performance of
After PR changes:
More optimizations are possible, but at this point this provides a good baseline — other micro-optimizations can come in separate PRs, along with the proof that they work.
```diff
@@            Coverage Diff             @@
##           master      #90      +/-   ##
==========================================
+ Coverage   85.93%   86.77%   +0.83%
==========================================
  Files          19       20       +1
  Lines         384      378       -6
  Branches       21       27       +6
==========================================
- Hits          330      328       -2
+ Misses         54       50       -4
```
@pchlupacek the new internal encoding is optimized for flatMap-ing over both successful values and errors; however, we are not exposing it in the API.
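The gist of that optimization can be sketched as follows (names invented for illustration, not the actual internals): the run-loop's continuation stack holds frames that can receive either a value or an error, so an error handler is just another frame rather than a `try`/`catch` wrapped around the whole interpretation.

```scala
// Hypothetical sketch: frames accept Either a value or an error, so error
// recovery composes on the same stack as ordinary flatMap continuations.
sealed trait Frame {
  def apply(r: Either[Throwable, Any]): Either[Throwable, Any]
}
// An ordinary bind: transforms values, lets errors fall through untouched.
final case class OnSuccess(f: Any => Any) extends Frame {
  def apply(r: Either[Throwable, Any]) = r.map(f)
}
// An error handler: recovers errors, lets values fall through untouched.
final case class OnError(h: Throwable => Any) extends Frame {
  def apply(r: Either[Throwable, Any]) = r match {
    case Left(e) => Right(h(e))
    case ok      => ok
  }
}

// One pass over the frame stack handles both paths uniformly.
def interpret(start: Either[Throwable, Any], frames: List[Frame]): Either[Throwable, Any] =
  frames.foldLeft(start)((acc, frame) => frame(acc))
```

With this shape, flatMap-ing over an error costs the same kind of frame push/pop as flatMap-ing over a value.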
@pchlupacek note that these changes make
@alexandru This looks very impressive at first glance. Thanks for taking the time to do this! I won't have a chance to review it potentially for a few days. @mpilquist has already given his stamp of approval, so if you guys want to move forward, feel free to do so. :-) Otherwise, I'll get to it asap.
@mpilquist thanks for the review and the merge.
@djspiewak when you have the time, please publish a hash version for testing purposes.
This might not be the last PR for performance optimizations. I'm tormenting myself with some profiling tools from Intel with a UI made in 1994, and I'm doing experiments; but as I said, it would be better to introduce further optimizations piecemeal, along with proof that they work.