pipeline redis operations to decrease overhead of repeated calls #126
Conversation
Please let me know if there are any changes that would help get this PR merged.
Sorry, let me resolve the failing tests first.
Tests should be passing now.
This looks great! Reducing network trips makes a ton of sense. It would be great to have some benchmarks showing how much of a performance improvement this is. Here is an example of some benchmarks I took for saving around 3K files: https://gist.github.com/kbaum/8dd2913db60dd99734aa875e5c1d9acc
Yes, I would need to get some benchmarks before getting this in, but as mentioned on the issue, I think a better approach is to avoid the N+1 calls entirely. The issue is that the data formats for tracepoint and coverage are different, so a single get and put works for coverage but wouldn't work as cleanly for tracepoint. I am planning to drop tracepoint support in Coverband 3, but I am not sure how to pick what to pull in before then, as all these other workarounds increase complexity and likely aren't necessary in the new path.
OK, with some of the other things folks brought up, I am going to pause on 3.0 for a bit and see about pulling this in for a 2.0.3 release. First, though, I am going to try to get a benchmark incorporated that can show the value... basically we have
OK, I added the benchmark.

On master:

On this PR:
While this is faster, it isn't a very significant speedup, at least with a local Redis; when reporting to a Redis across the network, I would expect more of an impact. Either way, I will pull this in and see about the memory store as well, as Coverband 3 is a few weeks out, I believe, given some of my other workload.
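To see why a local Redis understates the win, here is a hypothetical sketch that simulates the per-file round trips with `sleep`; the latency value and the `save_per_file`/`save_pipelined` method names are assumptions for illustration, not Coverband's actual benchmark code.

```ruby
require "benchmark"

# Simulated round-trip latency; real impact grows with network distance.
LATENCY = 0.0005 # 0.5 ms

# "Master" behaviour: two round trips (a read and a write) per file.
def save_per_file(files)
  files.each do
    sleep LATENCY # read existing coverage for the file
    sleep LATENCY # write merged coverage for the file
  end
end

# "PR" behaviour: one batched read and one batched write for the whole report.
def save_pipelined(_files)
  sleep LATENCY # batched read
  sleep LATENCY # batched write
end

files = Array.new(200) { |i| "file#{i}.rb" }

Benchmark.bm(12) do |x|
  x.report("per-file:")  { save_per_file(files) }
  x.report("pipelined:") { save_pipelined(files) }
end
```

With 200 files and 0.5 ms per trip, the per-file path spends roughly 200 ms just waiting on the network, while the pipelined path waits about 1 ms; raise `LATENCY` to approximate a remote Redis and the gap widens proportionally.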
It's great that you added benchmarks. We might want to try this out on Heroku using Heroku Redis; I've seen it's definitely much slower in that setup. Another concern: I'm not sure how Redis pipelining works, but Redis operates in a single thread. Does a long pipeline with thousands of writes lock up Redis for other operations? This could be an issue if people are using the same Redis instance for Coverband and Sidekiq, for example.
Yeah, I will look at improving the benchmark suite so you can run it against localhost or a remote Redis. I don't believe pipelining will lock up Redis; it mostly avoids network waiting, because the client doesn't wait for each response before sending the next command. See details here: https://redis.io/topics/pipelining. @kbaum, let me know in case you think I am missing something.
Re: pipelining. Reading those docs, it sounds like Redis puts the commands on a queue, which leads me to believe commands from other clients are still being served, so we are probably good here.
Yes, that is my understanding as well.
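As a rough illustration of the behaviour discussed above, here is a minimal sketch of client-side pipelining; `FakeRedis` is a hypothetical stand-in that only counts round trips, not the real redis-rb client.

```ruby
# Pipelining buffers commands client-side and flushes them to the server
# in one round trip. The server still executes each command in turn, so
# other clients' commands can interleave between them.
class FakeRedis
  attr_reader :round_trips, :store

  def initialize
    @round_trips = 0
    @store = {}
    @buffer = nil
  end

  def set(key, value)
    if @buffer
      @buffer << [key, value]   # pipelined: just queue the command
    else
      @store[key] = value       # normal: one round trip per command
      @round_trips += 1
    end
  end

  # Mimics the shape of redis-rb's `pipelined` block: queue everything,
  # then send the whole batch at once.
  def pipelined
    @buffer = []
    yield self
    @buffer.each { |k, v| @store[k] = v }
    @round_trips += 1           # the entire batch costs a single round trip
  ensure
    @buffer = nil
  end
end

plain = FakeRedis.new
100.times { |i| plain.set("file#{i}", i) }

batched = FakeRedis.new
batched.pipelined { |r| 100.times { |i| r.set("file#{i}", i) } }

puts plain.round_trips   # 100
puts batched.round_trips # 1
```

Both clients end up with identical data; only the number of network round trips differs, which matches the docs' point that pipelining is a transport optimization rather than a server-side lock.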
Thanks for taking a look at this and merging so quickly!
Sure @Kallin, I believe I will get the updated 2.0.3 release out today or tomorrow. It includes this fix and some others that should be helpful.
Made to address the performance concerns raised here: #124. Redis operations are pipelined (batched) so that only two calls total are made when saving a report, instead of two calls per file in the report.
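The 2N-to-2 call pattern described above can be sketched as follows; `CountingClient`, the command names, and the key layout are all illustrative assumptions, not Coverband's actual save path.

```ruby
# CountingClient stands in for a Redis connection and counts round trips.
class CountingClient
  attr_reader :trips

  def initialize
    @trips = 0
    @batching = false
  end

  def call(*_cmd)
    @trips += 1 unless @batching # each un-batched command is its own trip
  end

  def pipelined
    @batching = true
    yield self
    @batching = false
    @trips += 1                  # the whole batch goes out in one trip
  end
end

files = Array.new(500) { |i| "app/models/file#{i}.rb" }

# Before this PR: a read and a write per file (2 calls per file).
before = CountingClient.new
files.each do |f|
  before.call(:hget, "coverage", f)
  before.call(:hset, "coverage", f)
end

# After this PR: one batched read and one batched write for the whole report.
after = CountingClient.new
after.pipelined { |r| files.each { |f| r.call(:hget, "coverage", f) } }
after.pipelined { |r| files.each { |f| r.call(:hset, "coverage", f) } }

puts before.trips # 1000 (2 per file)
puts after.trips  # 2 (read batch + write batch)
```

The same 1000 commands are executed either way; the savings come entirely from collapsing 1000 network round trips into 2.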