
Take Shard data path into account in DiskThresholdDecider #13195

Merged
merged 1 commit into elastic:master on Aug 31, 2015

Conversation

s1monw (Contributor) commented Aug 29, 2015

The path that a shard is allocated on is not taken into account when
we decide to move a shard away from a node because it passed a watermark.
Even worse, we potentially moved away (relocated) a shard that was not even
allocated on that disk but on another disk on the node in question. This commit
adds a ShardRouting -> dataPath mapping to ClusterInfo that makes it possible to
identify which disk each shard is allocated on.

Relates to #13106
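
For illustration, here is a minimal, self-contained Java sketch of the approach described above. It does not use the real ShardRouting, ClusterInfo, or DiskThresholdDecider classes; the class names, shard ids, and paths are made up. The point is only that the cluster info carries a shard -> data-path mapping, and the decider treats a shard as a relocation candidate only if it actually lives on the path that crossed the watermark.

// Hypothetical sketch, not the actual Elasticsearch classes.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class ClusterInfoSketch {
    // shard identifier -> data path the shard is allocated on
    private final Map<String, String> shardDataPaths;

    ClusterInfoSketch(Map<String, String> shardDataPaths) {
        this.shardDataPaths = Collections.unmodifiableMap(new HashMap<>(shardDataPaths));
    }

    String getDataPath(String shardId) {
        return shardDataPaths.get(shardId);
    }
}

class DiskThresholdDeciderSketch {
    static boolean shouldRelocate(ClusterInfoSketch info, String shardId, String pathOverWatermark) {
        String dataPath = info.getDataPath(shardId);
        // Only relocate the shard if it actually lives on the path that crossed the watermark.
        return pathOverWatermark.equals(dataPath);
    }

    public static void main(String[] args) {
        ClusterInfoSketch info = new ClusterInfoSketch(Map.of(
                "[logs][0]", "/data/disk1",
                "[logs][1]", "/data/disk2"));
        System.out.println(shouldRelocate(info, "[logs][0]", "/data/disk1")); // true
        System.out.println(shouldRelocate(info, "[logs][1]", "/data/disk1")); // false
    }
}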

@@ -360,6 +365,7 @@ public void onResponse(IndicesStatsResponse indicesStatsResponse) {
newShardSizes.put(sid, size);
}
shardSizes = Collections.unmodifiableMap(newShardSizes);
shardRoutingToDataPath = newShardRoutingToDataPath;
Member

Do we want to wrap this one in Collections.unmodifiableMap(...) as well?

s1monw (Contributor, Author)

yeah sure good catch!
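
(Presumably the follow-up mirrors the shardSizes line from the quoted diff, something along the lines of:)

shardRoutingToDataPath = Collections.unmodifiableMap(newShardRoutingToDataPath);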

dakrone (Member) commented Aug 29, 2015

Left just a few comments

s1monw (Contributor, Author) commented Aug 29, 2015

@dakrone pushed a new commit

dakrone (Member) commented Aug 29, 2015

LGTM

s1monw force-pushed the shard_path_allocation_decisions branch from 92a20db to 1025e04 on August 29, 2015 at 21:11
s1monw (Contributor, Author) commented Aug 29, 2015

@dakrone thx, I will give others a chance to look at it and push on Monday.

s1monw force-pushed the shard_path_allocation_decisions branch from 1025e04 to 8e0357c on August 31, 2015 at 07:46
bleskes (Contributor) commented Aug 31, 2015

LGTM2. I do wonder (unrelated to this change) if things would be simpler if we kept the disk usage per path on the cluster info (rather than calculating the least-free-space path) and made the can-remain logic check each shard w.r.t. its own path.
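
For illustration, a hypothetical plain-Java sketch of that alternative (not Elasticsearch code; the class name, paths, and numbers are made up): keep free bytes per data path in the cluster info and let the can-remain check compare each shard against its own path rather than the node-wide least-free path.

// Hypothetical sketch of the per-path idea, not actual Elasticsearch code.
import java.util.Map;

class PerPathCanRemainSketch {
    // freeBytesPerPath: free bytes per data path on the node, e.g. "/data/disk1" -> free bytes
    static boolean canRemain(Map<String, Long> freeBytesPerPath, String shardDataPath, long watermarkFreeBytes) {
        Long free = freeBytesPerPath.get(shardDataPath);
        // Judge the shard by the free space on its own path, not the least-free path on the node.
        return free != null && free > watermarkFreeBytes;
    }

    public static void main(String[] args) {
        Map<String, Long> free = Map.of("/data/disk1", 1_000_000L, "/data/disk2", 500_000_000_000L);
        System.out.println(canRemain(free, "/data/disk1", 10_000_000_000L)); // false: its own disk breached the watermark
        System.out.println(canRemain(free, "/data/disk2", 10_000_000_000L)); // true
    }
}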

s1monw force-pushed the shard_path_allocation_decisions branch from 8e0357c to a17d750 on August 31, 2015 at 08:41
s1monw merged commit a17d750 into elastic:master on Aug 31, 2015
s1monw added the v2.1.0 label on Aug 31, 2015
s1monw deleted the shard_path_allocation_decisions branch on August 31, 2015 at 15:27
lcawl added the :Distributed/Distributed label and removed the :Allocation label on Feb 13, 2018
clintongormley added the :Distributed/Allocation label and removed the :Distributed/Distributed label on Feb 14, 2018
Labels: >bug, :Distributed/Allocation, v2.0.0-beta2

5 participants