Conversation
The DC-aware load balancing policy depends on an internal host map to determine host distance. The problem was that host distance was being checked to determine whether a node should be added or marked as up, so ignored hosts were dropped entirely. This fix always adds hosts and marks them up, even if they're ignored; distance is now only used to determine whether a host should have a connection pool.
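The behavior described above can be sketched as follows. This is a minimal standalone model, not the driver's actual code: `DCAwareCluster`, `on_host_up`, and the other names here are hypothetical stand-ins.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

enum HostDistance { LOCAL, REMOTE, IGNORED };

struct Host {
  std::string address;
  std::string dc;
  bool is_up;
};

class DCAwareCluster {
 public:
  DCAwareCluster(const std::string& local_dc, bool use_remote_hosts)
      : local_dc_(local_dc), use_remote_hosts_(use_remote_hosts) {}

  HostDistance distance(const Host& host) const {
    if (host.dc == local_dc_) return LOCAL;
    return use_remote_hosts_ ? REMOTE : IGNORED;
  }

  // The fix: always add the host and mark it up, even when it's ignored.
  // Distance only decides whether a connection pool is created.
  void on_host_up(const Host& host) {
    Host h = host;
    h.is_up = true;
    hosts_[h.address] = h;
    if (distance(h) != IGNORED) {
      pooled_.push_back(h.address);  // would create a connection pool here
    }
  }

  size_t host_count() const { return hosts_.size(); }
  size_t pool_count() const { return pooled_.size(); }

 private:
  std::string local_dc_;
  bool use_remote_hosts_;
  std::map<std::string, Host> hosts_;
  std::vector<std::string> pooled_;
};
```

With this shape, a host in a non-local DC under a policy that ignores remote hosts still appears in the host map as up; it simply never gets a pool.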
mikefero approved these changes Oct 4, 2018

mikefero (Contributor) left a comment:
This looks great ... just a couple of nits; feel free to merge at will.
Seems like TravisCI fails from time to time; wonder if we should increase timeouts and such since this machine is slower than a normal Mac. That can be done separately from this ticket.
  void Cluster::init(AddressGenerator& generator,
                     ClientConnectionFactory& factory,
-                    size_t num_nodes) {
+                    size_t num_nodes_dc1, size_t num_nodes_dc2) {
Contributor
nit: size_t num_nodes_dc1, size_t num_nodes_dc2) { =>
size_t num_nodes_dc1,
size_t num_nodes_dc2) {
  void init(AddressGenerator& generator,
            ClientConnectionFactory& factory,
-           size_t num_nodes);
+           size_t num_nodes_dc1, size_t num_nodes_dc2);
  ASSERT_TRUE(up_future->wait_for(WAIT_FOR_TIME));
  EXPECT_EQ(remote_address, listener->address());

  cluster.stop(1); // Stop local node to verify that remote host is tried for reconnection.
  bind_callback(on_connection_connected, connect_future.get())));

  ClusterSettings settings;
  settings.load_balancing_policy.reset(Memory::allocate<DCAwarePolicy>("dc3", 0, false)); // Invalid DC and no using remote hosts
Contributor
I know dc3 is not valid, but dc2 is also invalid. What do you think about changing this to just invalid_dc?
no using => not using
  ScopedPtr<QueryPlan> query_plan(default_policy->new_query_plan("", NULL, NULL));
  if (!query_plan->compute_next()) { // No hosts in the query plan
    const char* message;
    if (dynamic_cast<DCAwarePolicy::DCAwareQueryPlan*>(query_plan.get()) != NULL) { // Check if DC-aware
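The `dynamic_cast` check in the diff above can be illustrated with a small standalone sketch: when a query plan yields no hosts, the error message can hint that the DC-aware policy's local DC may be misconfigured. These types mimic, but are not, the driver's actual classes.

```cpp
#include <cassert>
#include <cstring>

struct QueryPlan {
  virtual ~QueryPlan() {}
  virtual bool compute_next() = 0;
};

struct DCAwareQueryPlan : QueryPlan {
  bool compute_next() { return false; }  // e.g. invalid local DC: no hosts
};

struct RoundRobinQueryPlan : QueryPlan {
  bool compute_next() { return false; }
};

// Pick a more specific error message when the plan is DC-aware.
const char* no_hosts_message(QueryPlan* plan) {
  if (dynamic_cast<DCAwareQueryPlan*>(plan) != NULL) {  // Check if DC-aware
    return "No hosts available; check the DC-aware policy's local DC setting";
  }
  return "No hosts available";
}
```

The `dynamic_cast` returns NULL for any plan type other than `DCAwareQueryPlan`, so non-DC-aware policies fall through to the generic message.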
Contributor
Nice ... thanks for taking care of that TODO
added 2 commits on October 4, 2018 at 09:27