Replicate writes only to fully initialized shards #28049

Merged
merged 6 commits into elastic:master from replicate-only-after-opening-engine on Feb 2, 2018

Conversation

@ywelsch ywelsch (Contributor) commented Jan 2, 2018

The primary currently replicates writes to all other shard copies as soon as they're added to the routing table. Initially those shards are not even ready yet to receive these replication requests, for example when undergoing a file-based peer recovery. Based on the specific stage that the shard copies are in, they will throw different kinds of exceptions when they receive the replication requests. The primary then ignores responses from shards that match certain exception types. With this mechanism it's not possible for a primary to distinguish between a situation where a replication target shard is not allocated and ready yet to receive requests and a situation where the shard was successfully allocated and active but subsequently failed.
This PR changes replication so that only initializing shards that have successfully opened their engine are used as replication targets. This removes the need to replicate requests to initializing shards that are not even ready yet to receive those requests. This saves on network bandwidth and enables features that rely on the distinction between a "not-yet-ready" shard and a failed shard.
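
To make the mechanism concrete, here is a minimal, hedged sketch of the idea (the class and field names are illustrative stand-ins, not the actual Elasticsearch ReplicationGroup code): copies whose allocation IDs are tracked by the primary, i.e. copies that have opened their engine, become replication targets, while the remaining copies are merely counted as skipped.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hedged sketch only: a simplified stand-in for how a primary could pick replication
// targets from the set of tracked allocation IDs.
final class ReplicationGroupSketch {
    final List<String> replicationTargets = new ArrayList<>(); // copies whose engine is open
    final List<String> skippedShards = new ArrayList<>();      // copies still being built

    ReplicationGroupSketch(List<String> assignedCopies, Set<String> trackedAllocationIds) {
        for (String allocationId : assignedCopies) {
            if (trackedAllocationIds.contains(allocationId)) {
                replicationTargets.add(allocationId); // replicate writes here
            } else {
                skippedShards.add(allocationId);      // counted, but not contacted
            }
        }
    }
}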

@ywelsch ywelsch added the :Distributed/Recovery (Anything around constructing a new shard, either from a local or a remote source.), :Sequence IDs, >enhancement, v6.2.0 and v7.0.0 labels Jan 2, 2018
@dnhatn dnhatn (Member) left a comment

LGTM.

* make sure to do this before sampling the max sequence number in the next step, to ensure that we send
* all documents up to maxSeqNo in phase2.
*/
runUnderPrimaryPermit(() -> shard.initiateTracking(request.targetAllocationId()));

++
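
For context, a hedged, self-contained sketch of the ordering that the quoted comment describes (all names below are illustrative stand-ins; only initiateTracking appears in the quoted diff): tracking of the recovery target is initiated first, and the max sequence number is sampled only afterwards, so every operation is either replicated to the target directly or shipped to it in phase 2.

// Hedged sketch, not the actual RecoverySourceHandler code.
final class RecoveryOrderingSketch {
    interface PrimaryShard {
        void initiateTracking(String targetAllocationId); // start tracking the recovery target
        long maxSeqNo();                                   // highest sequence number indexed so far
    }

    static void recoverTo(PrimaryShard primary, String targetAllocationId, long startingSeqNo) {
        // 1. Start tracking the target first, so every operation indexed from now on
        //    is replicated to it directly.
        primary.initiateTracking(targetAllocationId);

        // 2. Only then sample the max sequence number: all operations up to endingSeqNo
        //    are shipped in phase 2, so nothing can fall between the two mechanisms.
        final long endingSeqNo = primary.maxSeqNo();
        sendPhase2Operations(targetAllocationId, startingSeqNo, endingSeqNo);
    }

    static void sendPhase2Operations(String targetAllocationId, long startingSeqNo, long endingSeqNo) {
        // placeholder: replay history [startingSeqNo..endingSeqNo] to the target
    }
}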

@ywelsch ywelsch requested a review from bleskes January 9, 2018 16:18
@colings86 colings86 added v6.3.0 and removed v6.2.0 labels Jan 22, 2018
@bleskes bleskes (Contributor) left a comment

This is great.

As a follow-up, I think we should look at the tests, as they have become a bit messy over time: ReplicationOperationTests now tests logic that is bundled into other components. For example, it checks that we replicate to the right shards based on the shard routing table. Instead, we should make sure that the replication group reflects the shard table it got, and that the replication operation just does whatever the replication group says (regardless of the shard routing table).

continue;
}
final ReplicationGroup replicationGroup) {
totalShards.addAndGet(replicationGroup.getSkippedShards().size());

can we add a comment as to why we're doing this? (I know it wasn't there before, but I think it will help people read this class without having to know about the logic in the recovery source handler)
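
One possible comment, as a hedged sketch grounded in this PR's description (not necessarily the text that was eventually committed):

// Skipped shards are copies in the replication group that are not yet ready to receive
// operations (e.g. they are still running a file-based peer recovery and have not opened
// their engine). The request is not sent to them, but they are still counted towards the
// total number of shard copies so that the operation's shard accounting reflects the
// whole replication group.
totalShards.addAndGet(replicationGroup.getSkippedShards().size());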

if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
this.tracked = in.readBoolean();
} else {
this.tracked = inSync;

can you add a comment as to why this is a good fallback? I agree that it is, but it's not a trivial decision
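
A hedged sketch of such a comment, worded from the invariant this PR establishes (in-sync copies are always tracked); not necessarily the text that was committed:

if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
    this.tracked = in.readBoolean();
} else {
    // Nodes before 7.0.0-alpha1 do not send the tracked flag over the wire.
    // Falling back to the in-sync flag preserves the invariant that every in-sync
    // copy is also tracked, which is what the replication logic relies on.
    this.tracked = inSync;
}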

if (trackedAllocationIds.contains(shard.allocationId().getId())) {
replicationTargets.add(shard);
} else {
skippedShards.add(shard);

can we assert this one is not in sync?
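
A hedged sketch of the requested assertion, assuming a set of in-sync allocation IDs (here called inSyncAllocationIds) is in scope in the surrounding constructor:

if (trackedAllocationIds.contains(shard.allocationId().getId())) {
    replicationTargets.add(shard);
} else {
    // Requested assertion (sketch): an untracked copy must never be in-sync,
    // because in-sync copies are always tracked by the primary.
    assert inSyncAllocationIds.contains(shard.allocationId().getId()) == false :
        "in-sync shard copy " + shard + " is not tracked";
    skippedShards.add(shard);
}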

if (trackedAllocationIds.contains(relocationTarget.allocationId().getId())) {
replicationTargets.add(relocationTarget);
} else {
skippedShards.add(relocationTarget);

same assertion

}

/**
* Returns the subset of shards in the routing table that are unassigned or not required to replicate to. Includes relocation targets.

can you please add a concrete description of what these shards are? i.e., unassigned shards or initializing shards that are still being built and are not ready to receive operations.
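
One possible concrete wording, as a hedged sketch (not necessarily the Javadoc that was committed):

/**
 * Returns the shard copies that writes are counted against but not replicated to:
 * unassigned copies, and initializing copies (including relocation targets) that are
 * still being built, e.g. performing a file-based peer recovery, and are therefore
 * not yet ready to receive operations.
 */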

@ywelsch ywelsch force-pushed the replicate-only-after-opening-engine branch from 6ad850f to 9d114e8 on February 1, 2018 17:59
ywelsch added a commit that referenced this pull request Feb 2, 2018
The primary currently replicates writes to all other shard copies as soon as they're added to the routing table.
Initially those shards are not even ready yet to receive these replication requests, for example when undergoing a
file-based peer recovery. Based on the specific stage that the shard copies are in, they will throw different kinds of
exceptions when they receive the replication requests. The primary then ignores responses from shards that match
certain exception types. With this mechanism it's not possible for a primary to distinguish between a situation where a
replication target shard is not allocated and ready yet to receive requests and a situation where the shard was
successfully allocated and active but subsequently failed.
This commit changes replication so that only initializing shards that have successfully opened their engine are used as
replication targets. This removes the need to replicate requests to initializing shards that are not even ready yet to
receive those requests. This saves on network bandwidth and enables features that rely on the distinction between a
"not-yet-ready" shard and a failed shard.
@ywelsch ywelsch merged commit 031415a into elastic:master Feb 2, 2018
@ywelsch ywelsch (Contributor, Author) commented Feb 2, 2018

As a follow-up, I think we should look at the tests, as they have become a bit messy over time: ReplicationOperationTests now tests logic that is bundled into other components. For example, it checks that we replicate to the right shards based on the shard routing table. Instead, we should make sure that the replication group reflects the shard table it got, and that the replication operation just does whatever the replication group says (regardless of the shard routing table).

makes sense. I'll look into it

dnhatn added a commit to dnhatn/elasticsearch that referenced this pull request Feb 8, 2018
Shard not-available exceptions are currently ignored in replication as a best
effort to avoid failing shards that are not yet ready. However, these exceptions
can also come from fully active shards; if that is the case, we may have skipped
important failures from replicas. Since elastic#28049, only fully initialized
shards receive write requests. This restriction allows us to handle all
exceptions in replication.

There is a side effect to this change: if a replica retries its peer recovery a
second time after being tracked in the replication group, it can receive
replication requests even though it is not yet ready. That shard may then be
failed and allocated to another node even though it has a good Lucene index on
that node.

This PR does not change the way we report replication errors to users; as before,
shard not-available exceptions are not reported.
dnhatn added a commit that referenced this pull request Feb 8, 2018
Shard not-available exceptions are currently ignored in replication as a best
effort to avoid failing shards that are not yet ready. However, these exceptions
can also come from fully active shards; if that is the case, we may have skipped
important failures from replicas. Since #28049, only fully initialized shards
receive write requests. This restriction allows us to handle all exceptions in
replication.

There is a side effect to this change: if a replica retries its peer recovery a
second time after being tracked in the replication group, it can receive
replication requests even though it is not yet ready. That shard may then be
failed and allocated to another node even though it has a good Lucene index on
that node.

This PR does not change the way we report replication errors to users; as before,
shard not-available exceptions are not reported.

Relates #28049
Relates #28534
@clintongormley clintongormley added the :Distributed/Engine label (Anything around managing Lucene and the Translog in an open shard.) and removed the :Sequence IDs label Feb 14, 2018
dnhatn added a commit that referenced this pull request Feb 17, 2018
Shard not-available exceptions are currently ignored in replication as a best
effort to avoid failing shards that are not yet ready. However, these exceptions
can also come from fully active shards; if that is the case, we may have skipped
important failures from replicas. Since #28049, only fully initialized shards
receive write requests. This restriction allows us to handle all exceptions in
replication.

There is a side effect to this change: if a replica retries its peer recovery a
second time after being tracked in the replication group, it can receive
replication requests even though it is not yet ready. That shard may then be
failed and allocated to another node even though it has a good Lucene index on
that node.

This PR does not change the way we report replication errors to users; as before,
shard not-available exceptions are not reported.

Relates #28049
Relates #28534
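
A hedged, self-contained sketch of the policy this commit describes (illustrative names, not the actual ReplicationOperation code): because only fully initialized, tracked copies receive write requests, the primary no longer needs to special-case shard-not-available exceptions and can treat every replica failure as a real failure.

final class ReplicaFailurePolicySketch {
    interface ShardFailer {
        void failReplica(String allocationId, String message, Exception cause);
    }

    static void onReplicaResponse(ShardFailer failer, String allocationId, Exception failure) {
        if (failure == null) {
            return; // the replica applied the operation successfully
        }
        // No special-casing of "shard not available" any more: a tracked copy that
        // cannot apply a write is failed.
        failer.failReplica(allocationId, "failed to perform write on replica", failure);
    }
}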
dnhatn added a commit that referenced this pull request Mar 9, 2018
Today, failures from the primary-replica resync are ignored as a best effort to
not mark shards as stale during a cluster restart. However, this can be
problematic if replicas fail to execute resync operations but do just fine on
subsequent write operations. When this happens, the replica will miss some
operations from the new primary. There are several implications if the local
checkpoint on the replica can't advance because of the missing operations:

1. The global checkpoint won't advance, which causes both the primary and
replicas to keep many index commits.

2. The engine on the replica won't flush periodically, because the uncommitted
stats are calculated based on the local checkpoint.

3. The replica can use a large number of bitsets to keep track of operation seqnos.

However, we can prevent this issue while still preserving the best-effort
behavior by failing replicas that fail to execute resync operations, without
marking them as stale. We prepared the required infrastructure for this change
in #28049 and #28054.

Relates #24841
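
A hedged, self-contained sketch of the described policy (illustrative names, not the actual resync code): a replica that cannot apply a resync operation is failed so that it recovers again, but it is deliberately not marked as stale, so it keeps its in-sync status and its local Lucene data remains usable.

final class ResyncFailurePolicySketch {
    interface ReplicaFailureHandler {
        void failShard(String allocationId, String reason, Exception cause); // triggers a new recovery
        void markAsStale(String allocationId);                               // removes the copy from the in-sync set
    }

    static void onResyncReplicaFailure(ReplicaFailureHandler handler, String allocationId, Exception cause) {
        // Fail the copy so it recovers from the new primary...
        handler.failShard(allocationId, "failed to execute resync operation", cause);
        // ...but deliberately do NOT call handler.markAsStale(allocationId).
    }
}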
dnhatn added a commit that referenced this pull request Mar 10, 2018
Today, failures from the primary-replica resync are ignored as a best effort to
not mark shards as stale during a cluster restart. However, this can be
problematic if replicas fail to execute resync operations but do just fine on
subsequent write operations. When this happens, the replica will miss some
operations from the new primary. There are several implications if the local
checkpoint on the replica can't advance because of the missing operations:

1. The global checkpoint won't advance, which causes both the primary and
replicas to keep many index commits.

2. The engine on the replica won't flush periodically, because the uncommitted
stats are calculated based on the local checkpoint.

3. The replica can use a large number of bitsets to keep track of operation seqnos.

However, we can prevent this issue while still preserving the best-effort
behavior by failing replicas that fail to execute resync operations, without
marking them as stale. We prepared the required infrastructure for this change
in #28049 and #28054.

Relates #24841
sebasjm pushed a commit to sebasjm/elasticsearch that referenced this pull request Mar 10, 2018
Today, failures from the primary-replica resync are ignored as a best effort to
not mark shards as stale during a cluster restart. However, this can be
problematic if replicas fail to execute resync operations but do just fine on
subsequent write operations. When this happens, the replica will miss some
operations from the new primary. There are several implications if the local
checkpoint on the replica can't advance because of the missing operations:

1. The global checkpoint won't advance, which causes both the primary and
replicas to keep many index commits.

2. The engine on the replica won't flush periodically, because the uncommitted
stats are calculated based on the local checkpoint.

3. The replica can use a large number of bitsets to keep track of operation seqnos.

However, we can prevent this issue while still preserving the best-effort
behavior by failing replicas that fail to execute resync operations, without
marking them as stale. We prepared the required infrastructure for this change
in elastic#28049 and elastic#28054.

Relates elastic#24841
dnhatn added a commit to dnhatn/elasticsearch that referenced this pull request Apr 27, 2018
Since elastic#28049, only fully initialized shards receive write requests. This
enhancement allows us to handle all exceptions. In elastic#28571, we started
strictly handling shard-not-available exceptions and tried to keep the way we
report replication errors to users by only reporting an error if it is not a
shard-not-available exception. However, since then we have unintentionally
logged a warning for every exception. This change restores the previous behavior
of logging a warning only if an exception is not a shard-not-available exception.
dnhatn added a commit that referenced this pull request Apr 27, 2018
Since #28049, only fully initialized shards receive write requests. This
enhancement allows us to handle all exceptions. In #28571, we started strictly
handling shard-not-available exceptions and tried to keep the way we report
replication errors to users by only reporting an error if it is not a
shard-not-available exception. However, since then we have unintentionally
logged a warning for every exception. This change restores the previous behavior
of logging a warning only if an exception is not a shard-not-available exception.

Relates #28049
Relates #28571
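
A hedged, self-contained sketch of the restored logging behavior (illustrative names, not the actual Elasticsearch transport code): failures are logged at WARN only when the exception is not a shard-not-available exception, since shard unavailability is expected during recovery or relocation and would otherwise flood the logs.

import java.util.logging.Level;
import java.util.logging.Logger;

final class ReplicationWarnLoggingSketch {
    private static final Logger LOGGER = Logger.getLogger("replication");

    static boolean isShardNotAvailable(Exception e) {
        // stand-in for the real check on exception types such as
        // ShardNotFoundException or UnavailableShardsException
        return e instanceof IllegalStateException;
    }

    static void onReplicationFailure(String replica, Exception e) {
        if (isShardNotAvailable(e) == false) {
            LOGGER.log(Level.WARNING, "unexpected failure while replicating to " + replica, e);
        }
    }
}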
dnhatn added a commit that referenced this pull request Apr 27, 2018
Since #28049, only fully initialized shards receive write requests. This
enhancement allows us to handle all exceptions. In #28571, we started strictly
handling shard-not-available exceptions and tried to keep the way we report
replication errors to users by only reporting an error if it is not a
shard-not-available exception. However, since then we have unintentionally
logged a warning for every exception. This change restores the previous behavior
of logging a warning only if an exception is not a shard-not-available exception.

Relates #28049
Relates #28571
dnhatn added a commit that referenced this pull request Apr 27, 2018
Since #28049, only fully initialized shards receive write requests. This
enhancement allows us to handle all exceptions. In #28571, we started strictly
handling shard-not-available exceptions and tried to keep the way we report
replication errors to users by only reporting an error if it is not a
shard-not-available exception. However, since then we have unintentionally
logged a warning for every exception. This change restores the previous behavior
of logging a warning only if an exception is not a shard-not-available exception.

Relates #28049
Relates #28571
@jpountz jpountz removed the :Distributed/Engine label (Anything around managing Lucene and the Translog in an open shard.) Jan 29, 2019
@jimczi jimczi added v7.0.0-beta1 and removed v7.0.0 labels Feb 7, 2019
Labels
:Distributed/Recovery (Anything around constructing a new shard, either from a local or a remote source.), >enhancement, v6.3.0, v7.0.0-beta1