Data aggregation C++ API #248
Codecov Report
@@            Coverage Diff             @@
##           develop     #248      +/-  ##
===========================================
- Coverage    80.85%   80.15%    -0.70%
===========================================
  Files           51       54        +3
  Lines         3008     3326      +318
===========================================
+ Hits          2432     2666      +234
- Misses         576      660       +84
get_list_length() returns int (not size_t), which seems more consistent.
Force-pushed from c29b2f8 to ac17d2c
retrieval from aggregation lists. Tests now also check that dataset names match.
Error handling is easier to understand. This required some changes to the PipelineReply object and an additional vector of command pointers in the unordered pipeline execution.
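The reply-to-command bookkeeping mentioned above can be sketched roughly as follows. This is an illustrative assumption, not the actual SmartRedis implementation: the Command, Reply, and PipelineReply types here are simplified stand-ins, showing only how a parallel vector of command pointers lets an error message name the command that produced a failed reply.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Command {          // stand-in for a Redis command object
    std::string name;
};

struct Reply {            // stand-in for a single pipeline reply
    bool ok;
    std::string value;
};

struct PipelineReply {
    std::vector<Reply> replies;            // replies in execution order
    std::vector<const Command*> commands;  // parallel vector of command pointers

    // Describe the first failed reply using its originating command,
    // so the error message has context instead of just an index.
    std::string first_error() const {
        for (size_t i = 0; i < replies.size(); ++i)
            if (!replies[i].ok)
                return "command '" + commands[i]->name + "' failed";
        return "";
    }
};
```

Keeping the pointers in a second vector (rather than copying commands into the reply) avoids duplicating command payloads while still preserving the reply-to-command association.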
Really nice work here @mellis13. You and @billschereriii already anticipated and addressed the things I thought might be issues. The main change I'm suggesting is to add/expose pop functionality on the list, for the case where there are multiple consumers of the aggregated list.
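The value of a pop operation with multiple consumers is that the removal and the read happen atomically, so two consumers can never receive the same entry. A minimal sketch of that semantic, assuming a hypothetical in-process list (the class and method names here are illustrative, not the SmartRedis API):

```cpp
#include <cassert>
#include <deque>
#include <mutex>
#include <optional>
#include <string>

// Hypothetical aggregation-list wrapper: pop_front atomically removes
// and returns the head entry under a lock, so concurrent consumers
// never see the same dataset name twice.
class AggregationList {
public:
    void append(const std::string& name) {
        std::lock_guard<std::mutex> lk(m_);
        names_.push_back(name);
    }

    // Returns std::nullopt when the list is empty.
    std::optional<std::string> pop_front() {
        std::lock_guard<std::mutex> lk(m_);
        if (names_.empty())
            return std::nullopt;
        std::string n = names_.front();
        names_.pop_front();
        return n;
    }

    size_t length() {
        std::lock_guard<std::mutex> lk(m_);
        return names_.size();
    }

private:
    std::mutex m_;
    std::deque<std::string> names_;
};
```

A get-then-delete sequence of two separate calls would race under the same workload; exposing pop as a single operation (backed by an atomic server-side command) sidesteps that.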
Changes all look good to me
This is a draft PR for the data aggregation API in C++. A draft PR has been put up so that parallel work can proceed on multi-threading the dataset retrieval from cluster shards. CI tests will likely fail until all functionality is done.
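The multi-threaded retrieval mentioned above can be sketched as one task per shard whose results are gathered back in shard order. This is an assumption about the approach, not the PR's code; fetch_from_shard is a hypothetical stand-in for the real per-shard dataset fetch:

```cpp
#include <cassert>
#include <future>
#include <vector>

// Hypothetical per-shard fetch: pretend each shard holds 3 datasets,
// represented here by the shard id repeated 3 times.
std::vector<int> fetch_from_shard(int shard_id) {
    return std::vector<int>(3, shard_id);
}

// Launch one async task per shard, then concatenate the partial
// results in shard order so the combined output is deterministic.
std::vector<int> fetch_all(int num_shards) {
    std::vector<std::future<std::vector<int>>> tasks;
    for (int s = 0; s < num_shards; ++s)
        tasks.push_back(std::async(std::launch::async, fetch_from_shard, s));

    std::vector<int> all;
    for (auto& t : tasks) {
        std::vector<int> part = t.get();  // blocks until this shard finishes
        all.insert(all.end(), part.begin(), part.end());
    }
    return all;
}
```

Because each future is consumed in launch order, the per-shard work overlaps while the merged result stays ordered regardless of which shard finishes first.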
The following items still need to be addressed before final review and merge of this PR:
- rename_list()
- copy_list()
- Redis::run_via_unordered_pipelines(). This was implemented in RedisCluster, and the Redis version of this function will be extremely easy since all keys are colocated on one shard.
- Client::_get_dataset_list_range, to functionalize the code for reuse in other areas (e.g. Client.get_dataset()).
- Performance of RedisCluster::run_via_unordered_pipelines(). This will be evaluated in a separate ticket, and this PR should not be merged before performance data is available from that.
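The shape of run_via_unordered_pipelines can be sketched as: group commands by the shard their key hashes to, run one pipeline per shard, and use the recorded indices to put replies back in the original submission order. This is a simplified illustration under assumed names; shard_of is a toy hash and the per-shard execution is simulated, not the actual cluster round-trip:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

struct Cmd {
    std::string key;  // the key that determines the owning shard
};

// Toy key-to-shard hash, standing in for real cluster slot hashing.
static int shard_of(const std::string& key, int num_shards) {
    int h = 0;
    for (char c : key)
        h = (h * 31 + c) % 1000003;
    return h % num_shards;
}

std::vector<std::string> run_via_unordered_pipelines(
        const std::vector<Cmd>& cmds, int num_shards) {
    // Group original command indices by shard so each shard gets
    // exactly one pipeline, regardless of submission interleaving.
    std::map<int, std::vector<size_t>> by_shard;
    for (size_t i = 0; i < cmds.size(); ++i)
        by_shard[shard_of(cmds[i].key, num_shards)].push_back(i);

    std::vector<std::string> replies(cmds.size());
    for (auto& entry : by_shard) {
        // One pipeline round-trip per shard; the reply here is
        // simulated. The stored index maps each reply back to its
        // slot in the caller's original order.
        for (size_t i : entry.second)
            replies[i] = "OK:" + cmds[i].key;
    }
    return replies;  // aligned with the original command order
}
```

The index bookkeeping is what makes the pipelines "unordered": shards can be serviced in any order, yet the caller always sees replies positionally matched to the commands it submitted.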