
Use global checkpoint as starting seq in ops-based recovery #43463

Conversation

dnhatn
Member

@dnhatn dnhatn commented Jun 21, 2019

Today we use the local checkpoint of the safe commit on replicas to determine whether we can perform a sequence-number-based recovery. While this is a good choice due to its simplicity, it relies on flushing, which should not happen frequently; as a result, the safe commit's local checkpoint can lag well behind the replica's actual progress.

This change increases the chance of sequence-number-based recoveries by using the global checkpoint on the target as the starting sequence number when possible.
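
To make the contrast concrete, here is a minimal Java sketch of the two possible starting points. Everything below is invented for illustration (it is not the actual Elasticsearch implementation); only the -2 sentinel mirrors the value this PR's discussion uses to opt out of operation-based recovery.

```java
// Hypothetical sketch only; these names are invented for illustration and
// are not the actual Elasticsearch implementation.
final class StartingSeqNoSketch {

    // Sentinel meaning "no usable starting point"; -2 mirrors the value used
    // to opt out of operation-based recovery.
    static final long UNASSIGNED_SEQ_NO = -2;

    // Before this change: the starting point is tied to the safe commit,
    // whose local checkpoint only advances on flush, so an old flush forces
    // a file-based recovery even when the replica is nearly up to date.
    static long fromSafeCommit(long safeCommitLocalCheckpoint) {
        return safeCommitLocalCheckpoint + 1;
    }

    // After this change: the target first recovers locally up to the global
    // checkpoint, so peer recovery can start just above it, regardless of
    // when the shard was last flushed.
    static long fromGlobalCheckpoint(long globalCheckpoint) {
        return globalCheckpoint >= 0 ? globalCheckpoint + 1 : UNASSIGNED_SEQ_NO;
    }
}
```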

@dnhatn dnhatn added >enhancement WIP :Distributed/Recovery Anything around constructing a new shard, either from a local or a remote source. v8.0.0 v7.3.0 labels Jun 21, 2019
@elasticmachine
Collaborator

Pinging @elastic/es-distributed

@dnhatn
Member Author

dnhatn commented Jun 21, 2019

This PR is still a WIP, but I opened it to get your feedback on the approach.
@ywelsch @henningandersen @DaveCTurner Could you please have a look when you have some cycles? Thank you!

Contributor

@henningandersen henningandersen left a comment

Thanks @dnhatn , I left a few initial comments.

Contributor

@ywelsch ywelsch left a comment

Thanks for picking this up. I've left some preliminary comments.

@dnhatn dnhatn changed the base branch from master to peer-recovery-retention-leases June 26, 2019 18:00
@dnhatn
Member Author

dnhatn commented Jun 27, 2019

I talked to Yannick on another channel; we prefer to make this change together with the peer recovery retention leases work. Therefore, this PR will go to the feature branch (peer-recovery-retention-leases).

@henningandersen @ywelsch @DaveCTurner This is ready for another round. Can you please take another look? Thank you!

@ywelsch
Contributor

ywelsch commented Jun 27, 2019

@dnhatn there is a relevant test failure here, I think.

@dnhatn dnhatn requested a review from ywelsch July 22, 2019 14:39
Contributor

@ywelsch ywelsch left a comment

LGTM

@dnhatn
Member Author

dnhatn commented Jul 23, 2019

run elasticsearch-ci/packaging-sample

@dnhatn
Member Author

dnhatn commented Jul 23, 2019

@ywelsch @henningandersen @DaveCTurner Thank you for reviewing. Yannick, sorry for the many iterations in this PR. I should have done better here.

@dnhatn dnhatn merged commit d15684d into elastic:peer-recovery-retention-leases Jul 23, 2019
@dnhatn dnhatn deleted the recover-to-global-checkpoint branch July 23, 2019 16:47
dnhatn added a commit that referenced this pull request Jul 23, 2019
Today we use the local checkpoint of the safe commit on replicas as the
starting sequence number of operation-based peer recovery. While this is
a good choice due to its simplicity, we would need to share this
information between copies if we used retention leases in peer recovery.
We can avoid this extra work by using the global checkpoint as the
starting sequence number.

With this change, we will try to recover a replica locally up to the
global checkpoint before performing peer recovery. This commit should
also increase the chance of operation-based recoveries.
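
A rough sketch of the two steps described above; the TargetShard interface and all method names are invented and do not correspond to the real IndexShard API.

```java
// Invented types; not the real Elasticsearch IndexShard API.
interface TargetShard {

    // Replays operations from the shard's own translog up to (and including)
    // the given checkpoint; returns the highest sequence number replayed.
    long recoverLocallyUpTo(long globalCheckpoint);

    // Starts operation-based peer recovery from the given sequence number.
    void startPeerRecovery(long startingSeqNo);
}

final class ReplicaRecoverySketch {

    static void recover(TargetShard shard, long globalCheckpoint) {
        // Step 1: local recovery. Bring the copy up to the global checkpoint
        // using operations it already has on disk, at no network cost.
        long recoveredUpTo = shard.recoverLocallyUpTo(globalCheckpoint);

        // Step 2: peer recovery. The primary only needs to ship operations
        // above what was recovered locally.
        shard.startPeerRecovery(recoveredUpTo + 1);
    }
}
```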
dnhatn added a commit that referenced this pull request Jul 23, 2019
dnhatn added a commit that referenced this pull request Jul 24, 2019
… step (#44781)

If we force-allocate an empty or stale primary, the global checkpoint on
replicas might be higher than the primary's, because the local recovery
step (introduced in #43463) loads the previous (stale) global checkpoint
into the ReplicationTracker. There is no issue with the retention leases,
as a new lease with a higher term will supersede the stale one.

Relates #43463
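
The supersession argument can be pictured with a small hypothetical sketch; the Lease shape below is a simplified stand-in, not the real RetentionLease class, and it assumes the term-based comparison the commit message describes.

```java
// Simplified stand-in; not the real RetentionLease class.
record Lease(long primaryTerm, long retainingSeqNo) {}

final class LeaseSupersessionSketch {

    // A renewal issued under a higher primary term replaces the stale lease,
    // even if the stale lease retains a higher sequence number.
    static Lease supersede(Lease existing, Lease renewal) {
        return renewal.primaryTerm() > existing.primaryTerm() ? renewal : existing;
    }
}
```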
dnhatn added a commit that referenced this pull request Jul 30, 2019
For closed and frozen indices, we should not recover the shard locally
up to the global checkpoint before performing peer recovery, because that
copy might have been offline when the index was closed/frozen.

Relates #43463
Closes #44855
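
A hedged sketch of the guard described in this commit; all names are invented, and the -2 sentinel again stands in for "skip the local recovery step".

```java
import java.util.function.LongUnaryOperator;

// Hypothetical guard; all names here are invented for illustration.
final class LocalRecoveryGuardSketch {

    // Sentinel meaning "no local recovery was performed".
    static final long UNASSIGNED_SEQ_NO = -2;

    static long maybeRecoverLocally(boolean indexClosedOrFrozen,
                                    long globalCheckpoint,
                                    LongUnaryOperator recoverLocallyUpTo) {
        if (indexClosedOrFrozen) {
            // The copy may have been offline when the index was closed or
            // frozen, so its local translog cannot be trusted to reach the
            // global checkpoint; skip the local recovery step entirely.
            return UNASSIGNED_SEQ_NO;
        }
        return recoverLocallyUpTo.applyAsLong(globalCheckpoint);
    }
}
```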
dnhatn added a commit that referenced this pull request Aug 1, 2019
Previously, if the metadata snapshot was empty (either no commit was
found or an error occurred), we did not compute the starting sequence
number and used -2 to opt out of operation-based recovery. With #43463,
we now have a starting sequence number before reading the last commit.
Thus, we need to reset it if we fail to snapshot the store.

Closes #45072
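
A hypothetical sketch of the reset described here; the MetadataSnapshotter interface is invented, and only the -2 fallback comes from the commit message.

```java
// All names invented for illustration; only the -2 fallback mirrors the
// commit message above.
final class ResetStartingSeqNoSketch {

    // Opting out of operation-based recovery forces a file-based recovery.
    static final long UNASSIGNED_SEQ_NO = -2;

    interface MetadataSnapshotter {
        // Stand-in for snapshotting the store's last commit.
        void snapshotLastCommit() throws Exception;
    }

    static long safeStartingSeqNo(long computedStartingSeqNo, MetadataSnapshotter store) {
        try {
            store.snapshotLastCommit();
            return computedStartingSeqNo;
        } catch (Exception e) {
            // If the snapshot fails (no commit found, or an error), the
            // previously computed starting point is no longer backed by a
            // commit; reset it so we fall back to file-based recovery.
            return UNASSIGNED_SEQ_NO;
        }
    }
}
```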