Replicate writes only to fully initialized shards #28049

Merged
merged 6 commits into elastic:master from ywelsch:replicate-only-after-opening-engine on Feb 2, 2018

Conversation

@ywelsch
Contributor

ywelsch commented Jan 2, 2018

The primary currently replicates writes to all other shard copies as soon as they're added to the routing table. Initially those shards are not even ready yet to receive these replication requests, for example when undergoing a file-based peer recovery. Based on the specific stage that the shard copies are in, they will throw different kinds of exceptions when they receive the replication requests. The primary then ignores responses from shards that match certain exception types. With this mechanism it's not possible for a primary to distinguish between a situation where a replication target shard is not allocated and ready yet to receive requests and a situation where the shard was successfully allocated and active but subsequently failed.
This PR changes replication so that only initializing shards that have successfully opened their engine are used as replication targets. This removes the need to replicate requests to initializing shards that are not even ready yet to receive those requests. This saves on network bandwidth and enables features that rely on the distinction between a "not-yet-ready" shard and a failed shard.
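
Conceptually, the primary now derives its replication targets from the set of allocation IDs it has started tracking (copies that have opened their engine) rather than from the raw routing table. A minimal sketch of that selection, loosely based on the ReplicationGroup snippets reviewed below (relocation handling omitted; any names not shown in the diffs are assumptions):

import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import org.elasticsearch.cluster.routing.IndexShardRoutingTable;
import org.elasticsearch.cluster.routing.ShardRouting;

// Sketch only: split the shard copies of the routing table into actual replication targets
// (tracked by the primary, i.e. engine opened) and skipped shards (not yet ready for writes).
static void partitionCopies(IndexShardRoutingTable routingTable, Set<String> trackedAllocationIds,
                            List<ShardRouting> replicationTargets, List<ShardRouting> skippedShards) {
    for (ShardRouting shard : routingTable) {
        if (shard.unassigned()) {
            skippedShards.add(shard);          // no copy allocated, nothing to send to
        } else if (trackedAllocationIds.contains(shard.allocationId().getId())) {
            replicationTargets.add(shard);     // primary has started tracking it: safe to send writes
        } else {
            skippedShards.add(shard);          // e.g. still doing a file-based peer recovery
        }
    }
}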

@dnhatn
dnhatn approved these changes Jan 2, 2018
Contributor

dnhatn left a comment

LGTM.

core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java Outdated
* make sure to do this before sampling the max sequence number in the next step, to ensure that we send
* all documents up to maxSeqNo in phase2.
*/
runUnderPrimaryPermit(() -> shard.initiateTracking(request.targetAllocationId()));

@dnhatn

dnhatn Jan 2, 2018

Contributor

++

@ywelsch ywelsch requested a review from bleskes Jan 9, 2018
@colings86 colings86 added v6.3.0 and removed v6.2.0 labels Jan 22, 2018
@bleskes
bleskes approved these changes Feb 1, 2018
Member

bleskes left a comment

This is great.

As a follow-up I think we should look at the tests, as they have become a bit messy over time: ReplicationOperationTests now tests logic that is bundled into other components. For example, it checks that we replicate to the right shards based on the shard routing table. Instead, we should make sure that the replication group reflects the shard table it was given, and that the replication operation just does whatever the replication group says (regardless of the shard routing table).

core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java Outdated
continue;
}
final ReplicationGroup replicationGroup) {
totalShards.addAndGet(replicationGroup.getSkippedShards().size());

@bleskes

bleskes Feb 1, 2018

Member

can we add a comment as to why we're doing this? (I know it wasn't there before; I think it will help people read this class without having to know about the logic in the recovery source handler)
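
For what it's worth, a sketch of the kind of comment that could answer this; the rationale is my reading of the change, not wording from the PR:

// Skipped shards are copies that are not yet ready to receive operations (e.g. still
// recovering and not yet tracked by the primary). No request is sent to them, but they
// still count towards the total number of shard copies reported in the replication response.
totalShards.addAndGet(replicationGroup.getSkippedShards().size());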

core/src/main/java/org/elasticsearch/index/seqno/ReplicationTracker.java Outdated
if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
this.tracked = in.readBoolean();
} else {
this.tracked = inSync;

@bleskes

bleskes Feb 1, 2018

Member

can you add a comment as to why this is a good fallback? I agree it is, but it's not a trivial decision
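
A sketch of a possible comment for this fallback; the reasoning is an assumption drawn from the tracking invariants described in this PR, not text from it:

if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
    this.tracked = in.readBoolean();
} else {
    // Nodes before 7.0 do not serialize the tracked flag. An in-sync copy is by definition
    // expected to receive every operation, so treating "in-sync" as "tracked" preserves the
    // invariant that every in-sync copy is also a replication target.
    this.tracked = inSync;
}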

core/src/main/java/org/elasticsearch/index/shard/ReplicationGroup.java Outdated
if (trackedAllocationIds.contains(shard.allocationId().getId())) {
replicationTargets.add(shard);
} else {
skippedShards.add(shard);

@bleskes

bleskes Feb 1, 2018

Member

can we assert this one is not in sync?
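
A sketch of what the requested assertion could look like, assuming the set of in-sync allocation IDs (called inSyncAllocationIds here) is available at this point:

if (trackedAllocationIds.contains(shard.allocationId().getId())) {
    replicationTargets.add(shard);
} else {
    // requested assertion: an untracked copy must not be in sync
    assert inSyncAllocationIds.contains(shard.allocationId().getId()) == false :
        "in-sync shard copy " + shard + " is not tracked";
    skippedShards.add(shard);
}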

core/src/main/java/org/elasticsearch/index/shard/ReplicationGroup.java Outdated
if (trackedAllocationIds.contains(relocationTarget.allocationId().getId())) {
replicationTargets.add(relocationTarget);
} else {
skippedShards.add(relocationTarget);

@bleskes

bleskes Feb 1, 2018

Member

same assertion

core/src/main/java/org/elasticsearch/index/shard/ReplicationGroup.java Outdated
}

/**
* Returns the subset of shards in the routing table that are unassigned or not required to replicate to. Includes relocation targets.

@bleskes

bleskes Feb 1, 2018

Member

can you please add a concrete description of what these shards are? i.e., unassigned shards, or initializing shards that are still being built and are not ready to receive operations.
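
A sketch of a more concrete Javadoc along these lines (the wording is mine, not from the PR):

/**
 * Returns the subset of shard copies that no replication request is sent to: unassigned shards,
 * and initializing shards that have not yet opened their engine (for example, copies still
 * performing a file-based peer recovery) and are therefore not ready to receive operations.
 * Includes relocation targets of such shards.
 */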

ywelsch added 5 commits Dec 29, 2017
@ywelsch ywelsch force-pushed the ywelsch:replicate-only-after-opening-engine branch to 9d114e8 Feb 1, 2018
ywelsch added a commit that referenced this pull request Feb 2, 2018
@ywelsch ywelsch merged commit 031415a into elastic:master Feb 2, 2018
2 checks passed
CLA: Commit author is a member of Elasticsearch
elasticsearch-ci: Build finished.
@ywelsch

Contributor Author

ywelsch commented Feb 2, 2018

As a follow-up I think we should look at the tests, as they have become a bit messy over time: ReplicationOperationTests now tests logic that is bundled into other components. For example, it checks that we replicate to the right shards based on the shard routing table. Instead, we should make sure that the replication group reflects the shard table it was given, and that the replication operation just does whatever the replication group says (regardless of the shard routing table).

Makes sense. I'll look into it.

dnhatn added a commit to dnhatn/elasticsearch that referenced this pull request Feb 8, 2018
dnhatn added a commit that referenced this pull request Feb 8, 2018
Shard not-available exceptions are currently ignored during replication as a best effort to avoid failing not-yet-ready shards. However, these exceptions can also come from fully active shards, in which case we may have skipped important failures from replicas. Since #28049, only fully initialized shards receive write requests. This restriction allows us to handle all exceptions during replication.

There is a side effect to this change. If a replica retries its peer recovery a second time after being tracked in the replication group, it can receive replication requests even though it is not yet ready. That shard may then be failed and allocated to another node even though it has a good Lucene index on that node.

This PR does not change the way we report replication errors to users; as before, shard not-available exceptions won't be reported.

Relates #28049
Relates #28534
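
A hedged sketch of the behavioral shift this enables; the method and helper names below are hypothetical, not the actual Elasticsearch call sites:

import org.elasticsearch.cluster.routing.ShardRouting;

// Hypothetical sketch: once only tracked, engine-ready copies are replication targets,
// the primary no longer needs to swallow "shard not available" exceptions and can treat
// every replica failure as a real failure.
void onReplicaFailure(ShardRouting replica, Exception e) {
    // before #28049 the primary had to ignore TransportActions.isShardNotAvailableException(e)
    failShardAndMarkStale(replica, e);   // hypothetical helper: fail the copy and mark it stale
}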
dnhatn added a commit that referenced this pull request Feb 17, 2018
dnhatn added a commit that referenced this pull request Mar 9, 2018
Today, failures from the primary-replica resync are ignored as a best effort to avoid marking shards as stale during a cluster restart. However, this can be problematic if replicas fail to execute the resync operations but handle subsequent write operations just fine. When this happens, a replica will miss some operations from the new primary. There are several implications if the local checkpoint on the replica cannot advance because of the missing operations:

1. The global checkpoint won't advance, which causes both the primary and the replicas to retain many index commits.

2. The engine on the replica won't flush periodically, because the uncommitted stats are calculated from the local checkpoint.

3. The replica may use a large number of bitsets to keep track of operation seqnos.

However, we can prevent this issue while keeping the best-effort behavior by failing replicas that fail to execute resync operations, without marking them as stale. We prepared the required infrastructure for this in #28049 and #28054.

Relates #24841
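
An illustrative sketch (not Elasticsearch's LocalCheckpointTracker) of why a single missed resync operation blocks the local checkpoint, and with it the global checkpoint:

import java.util.SortedSet;

// The local checkpoint is the highest seq# such that all lower sequence numbers have been
// processed; a gap left by a missed operation keeps it from advancing.
static long localCheckpoint(SortedSet<Long> processedSeqNos) {
    long checkpoint = -1;                    // no operations processed yet
    for (long seqNo : processedSeqNos) {
        if (seqNo == checkpoint + 1) {
            checkpoint = seqNo;              // contiguous: the checkpoint advances
        } else if (seqNo > checkpoint + 1) {
            break;                           // gap (e.g. a missed resync op): stuck here
        }
    }
    return checkpoint;
}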
dnhatn added a commit that referenced this pull request Mar 10, 2018
sebasjm pushed a commit to sebasjm/elasticsearch that referenced this pull request Mar 10, 2018
dnhatn added a commit to dnhatn/elasticsearch that referenced this pull request Apr 27, 2018
dnhatn added a commit that referenced this pull request Apr 27, 2018
Since #28049, only fully initialized shards receive write requests. This enhancement allows us to handle all exceptions. In #28571, we started strictly handling shard-not-available exceptions and tried to keep the way we report replication errors to users by reporting only errors that are not shard-not-available exceptions. However, since then we have unintentionally logged a warning for every exception. This change restores the previous behavior of logging a warning only if an exception is not a shard-not-available exception.

Relates #28049
Relates #28571
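
A minimal sketch of the restored logging behavior; the method name is hypothetical, but TransportActions.isShardNotAvailableException is the existing Elasticsearch helper:

import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.support.TransportActions;

// Warn only when a replica failure is not a routine shard-not-available exception.
void logReplicaFailure(Logger logger, String description, Exception e) {
    if (TransportActions.isShardNotAvailableException(e) == false) {
        logger.warn(description, e);
    }
}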
dnhatn added a commit that referenced this pull request Apr 27, 2018
dnhatn added a commit that referenced this pull request Apr 27, 2018
@jimczi jimczi added v7.0.0-beta1 and removed v7.0.0 labels Feb 7, 2019