Make shard balancing deterministic if weights are identical
It happens to be the case that the iteration order of a HashMap's key set
can differ across runs. This can cause non-deterministic results in shard
balancing if weights are identical and multiple shards of the same index
are eligible for relocation. This commit adds a tie-breaker based on the
shard ID, preferring the lowest shard ID. This also makes
`AddIncrementallyTests#testAddNodesAndIndices` reproducible.

Closes #4867
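
For illustration, a minimal, self-contained sketch of the tie-breaking rule, using hypothetical stand-in types (Candidate, pickCandidate) rather than the actual Elasticsearch balancer classes: when two candidates produce the same weight delta, the lower shard ID wins, so the result no longer depends on hash-based iteration order.

import java.util.List;

public class TieBreakSketch {

    // Hypothetical stand-in for a shard that has a numeric ID and a computed weight delta.
    record Candidate(int shardId, float delta) {}

    // Pick the candidate with the smallest delta; on equal deltas, prefer the lowest shard ID.
    static Candidate pickCandidate(List<Candidate> candidates) {
        Candidate best = null;
        for (Candidate c : candidates) {
            if (best == null
                    || c.delta() < best.delta()
                    // tie-breaker: identical deltas resolve to the lowest shard ID
                    || (c.delta() == best.delta() && c.shardId() < best.shardId())) {
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // The same candidates in two different iteration orders: shard 1 wins the tie both times.
        List<Candidate> order1 = List.of(new Candidate(3, 0.5f), new Candidate(1, 0.5f));
        List<Candidate> order2 = List.of(new Candidate(1, 0.5f), new Candidate(3, 0.5f));
        System.out.println(pickCandidate(order1).shardId()); // 1
        System.out.println(pickCandidate(order2).shardId()); // 1
    }
}

The committed change applies the same idea inside tryRelocateShard, where the condition candidate.id() > shard.id() keeps the candidate with the lowest shard ID when deltas are equal.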
s1monw committed Jan 23, 2014
1 parent 3158776 commit 592a411
Showing 1 changed file with 4 additions and 1 deletion.
@@ -804,7 +804,10 @@ private boolean tryRelocateShard(Operation operation, ModelNode minNode, ModelNo
 if ((srcDecision = maxNode.removeShard(shard)) != null) {
     minNode.addShard(shard, srcDecision);
     final float delta = weight.weight(operation, this, minNode, idx) - weight.weight(operation, this, maxNode, idx);
-    if (delta < minCost) {
+    if (delta < minCost ||
+            (candidate != null && delta == minCost && candidate.id() > shard.id())) {
+        /* this last line is a tie-breaker to make the shard allocation alg deterministic
+         * otherwise we rely on the iteration order of the index.getAllShards() which is a set.*/
         minCost = delta;
         candidate = shard;
         decision = new Decision.Multi().add(allocationDecision).add(rebalanceDecision);
