
Remove cluster.routing.allocation.balance.primary #9159

Merged
merged 1 commit into from Jan 6, 2015

Conversation

@s1monw s1monw commented Jan 6, 2015

The cluster.routing.allocation.balance.primary setting has caused
a lot of confusion in the past while it has very little benefit from a
shard allocation point of view. Users tend to modify this value to
evenly distribute primaries across the nodes, which is dangerous since
the primary flag on its own can trigger relocations. The primary flag for a shard
should not have any impact on cluster performance unless the high-level feature
suffering from primary hotspots is buggy. Moreover, this setting was only intended as a
tie-breaker, which is no longer necessary since the algorithm is deterministic.

This commit removes this setting entirely.
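
For context, the kind of tweak the description warns against typically looked like the request below, bumping the primary factor to force primaries to spread out. This is an illustrative sketch only: the value is made up, and after this PR the setting no longer exists.

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.balance.primary": 0.4
  }
}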

@@ -69,28 +67,25 @@
public static final String SETTING_THRESHOLD = "cluster.routing.allocation.balance.threshold";
public static final String SETTING_INDEX_BALANCE_FACTOR = "cluster.routing.allocation.balance.index";
public static final String SETTING_SHARD_BALANCE_FACTOR = "cluster.routing.allocation.balance.shard";
public static final String SETTING_PRIMARY_BALANCE_FACTOR = "cluster.routing.allocation.balance.primary";

private static final float DEFAULT_INDEX_BALANCE_FACTOR = 0.5f;
private static final float DEFAULT_SHARD_BALANCE_FACTOR = 0.45f;

Do we want to bump the shard balance factor back up to 0.5 to keep the default weight sum at 1.0?

Probably a good call. I try to always keep them totaling 1.0, for my sanity.
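
For anyone tuning these after the change, the two remaining factors stay adjustable as dynamic cluster settings. A minimal sketch, with values mirroring the suggestion above (0.5 each, so the weights sum to 1.0); they are illustrative, not a recommendation:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.balance.index": 0.5,
    "cluster.routing.allocation.balance.shard": 0.5
  }
}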

dakrone commented Jan 6, 2015

LGTM, left one minor comment

@s1monw s1monw removed the v1.5.0 label Jan 6, 2015
s1monw commented Jan 6, 2015

@dakrone pushed a new commit

dakrone commented Jan 6, 2015

LGTM

@s1monw s1monw merged commit 236e249 into elastic:master Jan 6, 2015
@clintongormley clintongormley changed the title [ALLOCATION] Remove primary balance factor Remove primary balance factor Jun 8, 2015
@clintongormley clintongormley added :Core/Infra/Settings Settings infrastructure and APIs and removed :Allocation labels Jun 8, 2015
@clintongormley clintongormley changed the title Remove primary balance factor Remove cluster.routing.allocation.balance.primary Jun 8, 2015
gkukal commented Jan 14, 2016

How can we keep all primary shards from ending up on the same node? In heavy-write use cases, having all primaries on one node can cause issues for that node. Is there any alternative?

@clintongormley

@gkukal primaries and replicas do the same amount of work, so it doesn't matter if one node holds more primaries than another. The only exception to this rule is if you make heavy use of the update API. (see #8369)

chaitd commented Apr 2, 2019

@gkukal
Have you resolved the problem? I have the same problem.

chaitd commented Apr 2, 2019

@gkukal primaries and replicas do the same amount of work, so it doesn't matter if one node holds more primaries than another. The only exception to this rule is if you make heavy use of the update API. (see #8369)

@clintongormley
If I set the preference param, primaries and replicas do NOT do the same amount of work, and one node can become a hotspot.
Could you tell me how to solve it?
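
For context on the preference parameter: a fixed preference value pins repeated requests to the same shard copies, so the node holding those copies takes more than its share of the search load. A minimal sketch, with the index name and preference string as made-up placeholders:

GET /my_index/_search?preference=user-123
{
  "query": { "match_all": {} }
}

Every request sent with the same preference string is routed to the same copies, which is what turns one node into a hotspot when many requests share that value.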

chaitd commented Apr 2, 2019

@gkukal @clintongormley I found a solution. We can cancel a replica shard and move the primary shard to the other node, like this:
{ "commands": [{ "cancel": { "index": "indexName", "shard": 1, "node": "node2" } }, { "move": { "index": "indexName", "shard": 1, "from_node": "node1", "to_node": "node2" } }] }
I tested it; Elasticsearch responds quickly and the result is successful.
Is this the proper way to do it?
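
For reference, that command body is sent to the cluster reroute API. A sketch with the index and node names from the comment above kept as placeholders:

POST /_cluster/reroute
{
  "commands": [
    { "cancel": { "index": "indexName", "shard": 1, "node": "node2" } },
    { "move": { "index": "indexName", "shard": 1, "from_node": "node1", "to_node": "node2" } }
  ]
}

Note that this only rearranges the copies once; unless allocation is otherwise constrained, the balancer may move shards again later.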
