Remove cluster.routing.allocation.balance.primary
#9159
Conversation
@@ -69,28 +67,25 @@
     public static final String SETTING_THRESHOLD = "cluster.routing.allocation.balance.threshold";
     public static final String SETTING_INDEX_BALANCE_FACTOR = "cluster.routing.allocation.balance.index";
     public static final String SETTING_SHARD_BALANCE_FACTOR = "cluster.routing.allocation.balance.shard";
-    public static final String SETTING_PRIMARY_BALANCE_FACTOR = "cluster.routing.allocation.balance.primary";

     private static final float DEFAULT_INDEX_BALANCE_FACTOR = 0.5f;
     private static final float DEFAULT_SHARD_BALANCE_FACTOR = 0.45f;
Do we want to bump the shard balance factor back up to 0.5 to keep the default weight sum at 1.0?
Probably a good call. I try to always keep them totaling 1.0 for my sanity.
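For context on why reviewers care about the factors summing to 1.0: the balanced allocator computes a per-node weight as a weighted sum of an index-level term and a shard-level term. The sketch below is illustrative only; the class, method, and parameter names are hypothetical and the real `BalancedShardsAllocator` formula differs in detail.

```java
// Illustrative sketch of the two-term balance weight. The factor constants
// mirror the settings in the diff above; everything else is hypothetical.
public class BalanceWeightSketch {
    static final float INDEX_BALANCE_FACTOR = 0.5f;  // cluster.routing.allocation.balance.index
    static final float SHARD_BALANCE_FACTOR = 0.45f; // cluster.routing.allocation.balance.shard

    // weight ~ indexFactor * (shards of this index on the node - average per node)
    //        + shardFactor * (total shards on the node - average per node)
    static float weight(int shardsOfIndexOnNode, float avgShardsOfIndexPerNode,
                        int totalShardsOnNode, float avgShardsPerNode) {
        return INDEX_BALANCE_FACTOR * (shardsOfIndexOnNode - avgShardsOfIndexPerNode)
             + SHARD_BALANCE_FACTOR * (totalShardsOnNode - avgShardsPerNode);
    }

    public static void main(String[] args) {
        // A node holding more than its share (of one index and of shards overall)
        // gets a higher weight, so the allocator relocates shards away from it.
        float overloaded = weight(3, 1.5f, 6, 3.0f);
        float underloaded = weight(1, 1.5f, 2, 3.0f);
        System.out.println(overloaded > underloaded);
    }
}
```

With the primary factor removed, only these two terms remain, so keeping them summing to 1.0 preserves the overall scale of the weight relative to the threshold setting.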
LGTM, left one minor comment
@dakrone pushed a new commit
LGTM
The `cluster.routing.allocation.balance.primary` setting has caused a lot of confusion in the past while having very little benefit from a shard allocation point of view. Users tend to modify this value to evenly distribute primaries across the nodes, which is dangerous since a primary flag on its own can trigger relocations. The primary flag for a shard should not have any impact on cluster performance unless the high-level feature suffering from primary hotspots is buggy. This setting was intended to be a tie-breaker, which is no longer necessary since the algorithm is deterministic. This commit removes the setting entirely.
force-pushed from 85c003d to 236e249
`cluster.routing.allocation.balance.primary`

How can we avoid all primary shards ending up on the same node? In heavy-write use cases, having all primaries on one node can cause problems for that node. Any alternative?
@gkukal
@clintongormley
@gkukal @clintongormley I found a solution: we can cancel a replica shard, then move the primary shard to the other node, like this:
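The cancel-then-move approach described above maps onto the cluster reroute API. A sketch of the request body is below; the index, shard number, and node names are placeholders, and whether this is appropriate depends on the cluster's allocation settings.

```
POST /_cluster/reroute
{
  "commands": [
    {
      "cancel": {
        "index": "my_index",
        "shard": 0,
        "node": "node_a",
        "allow_primary": false
      }
    },
    {
      "move": {
        "index": "my_index",
        "shard": 0,
        "from_node": "node_b",
        "to_node": "node_a"
      }
    }
  ]
}
```

The `cancel` command drops the replica copy on `node_a`, and `move` then relocates the primary onto that node; the cancelled replica is re-allocated elsewhere by the balancer.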