Reduce address comparisons for network topology replica calculation #532
Closed
Conversation
added 2 commits
August 8, 2022 09:56
This uses a `DenseHashSet` to prevent duplicate replicas instead of doing a linear scan through the existing replicas. I'm seeing around a 4.5x speedup for larger replication factors (rf = 54).
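A minimal sketch of the idea: keep the replicas in a vector (preserving token order) and pair it with a hash set for O(1) membership tests, rather than scanning the vector each time. The real code uses a `DenseHashSet` keyed on hosts; here `std::unordered_set` and the simplified `Host`/`add_replica` types are stand-ins, not the driver's actual API.

```cpp
#include <cassert>
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical simplified stand-in for the driver's Host type.
struct Host {
  std::string name;
};

// Appends the host only if it has not been seen before. The vector keeps
// the replicas in token order; the set makes the duplicate check O(1)
// instead of a linear scan over the existing replicas.
bool add_replica(std::vector<Host>& replicas,
                 std::unordered_set<std::string>& seen,
                 const Host& host) {
  if (!seen.insert(host.name).second) {
    return false;  // already a replica
  }
  replicas.push_back(host);
  return true;
}
```

With a linear scan, building a replica list of size r costs O(r) per insertion (O(r^2) total), which is why the win grows with the replication factor.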
mpenick
commented
Aug 15, 2022
```diff
- size_t replication_factor = 3;
- size_t total_replicas = std::min(num_hosts, replication_factor) * num_dcs;
+ size_t replication_factor = 54;
+ size_t total_replicas = replication_factor;
```
Contributor
Author
This should likely be reverted. This is a pathological use case though.
mpenick
commented
Aug 15, 2022
```cpp
static size_t size_of(const String& value) { return value.size(); }

static size_t size_of(const Address& value) { char buf[16]; return value.to_inet(buf); }
```
Contributor
Author
These changes fix test warnings.
zakalibit
reviewed
Aug 15, 2022
src/token_map_impl.hpp
Outdated
```diff
    skipped_endpoints_this_dc.push_back(curr_token_it);
  } else {
-   if (add_replica(replicas, Host::Ptr(host))) {
+   if (replicas_set.insert(host).second) {
```
Contributor
Why not just embed the changes in `add_replica()`?
Another option is to keep the replicas sorted and use binary search, i.e. use `std::sort()`, or even `std::stable_sort()`, which should be faster for sorted or almost-sorted data, then use `std::lower_bound()` with a custom comparator to check whether the host needs to be added.
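The sorted-plus-binary-search suggestion above could be sketched as follows. This is a hypothetical illustration, not code from the PR; `Host` is a simplified stand-in, and the comparator orders hosts by name.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical simplified stand-in for the driver's Host type.
struct Host {
  std::string name;
};

// Custom comparator for ordering hosts.
bool host_less(const Host& a, const Host& b) { return a.name < b.name; }

// Inserts the host into a sorted vector only if it is absent, using
// std::lower_bound for an O(log n) membership test. Note that insertion
// into the middle of a vector is still O(n) in moves.
bool insert_if_absent(std::vector<Host>& sorted_hosts, const Host& host) {
  auto it = std::lower_bound(sorted_hosts.begin(), sorted_hosts.end(),
                             host, host_less);
  if (it != sorted_hosts.end() && it->name == host.name) {
    return false;  // already present
  }
  sorted_hosts.insert(it, host);
  return true;
}
```

The catch, as discussed below, is that this keeps the replicas in comparator order rather than token order, so a separate sorted index (or a hash set, as the PR chose) is needed if the output order must be preserved.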
Contributor
Author
The token order of replicas matters.
Contributor
You could still embed the logic into `add_replica()`.
mpenick
commented
Aug 15, 2022