
raftstore: gc abnormal snapshots and destroy peer if failed to apply snapshots. #16992

Merged

merged 27 commits into tikv:master on Jun 14, 2024

Conversation

@LykxSassinator (Contributor) commented May 10, 2024

What is changed and how it works?

Issue Number: Close #15292

What's Changed:

Previously, there was a pending task to address the scenario where TiKV would panic if applying a snapshot failed due to abnormal conditions such as IO errors or other unexpected issues.

This pull request resolves the issue by adding a `tombstone: bool` field to `SnapshotApplied`, indicating whether the failure was caused by an abnormal snapshot.
Additionally, the `region_id` of the abnormal peer is recorded in the newly added `StoreMeta::damaged_regions`, which is used to remove the associated peer.
Finally, the peer is destroyed later through the `StoreHeartbeat` to PD, which creates a remove-peer operator to remove the peer safely.

Replace `SnapshotApplied` with `SnapshotApplied { peer_id: u64, tombstone: bool }`. If `tombstone == true`, the corresponding peer will be automatically GCed, as illustrated in the sketch below.
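
For illustration, here is a minimal, self-contained sketch of that flow; the `PeerMsg`, `StoreMeta`, and handler below are simplified stand-ins rather than the actual raftstore definitions:

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};

// Simplified stand-in for the raftstore peer message that now carries the flag.
enum PeerMsg {
    // `tombstone == true` means the snapshot could not be applied (e.g. it
    // contains corrupted blocks) and the peer should be garbage-collected.
    SnapshotApplied { peer_id: u64, tombstone: bool },
}

// Simplified stand-in for StoreMeta with the newly added field.
#[derive(Default)]
struct StoreMeta {
    // Regions whose peers failed to apply a snapshot; reported to PD through
    // the store heartbeat so PD can remove the damaged peers safely.
    damaged_regions: HashSet<u64>,
}

fn on_peer_msg(meta: &Arc<Mutex<StoreMeta>>, region_id: u64, msg: PeerMsg) {
    match msg {
        PeerMsg::SnapshotApplied { peer_id, tombstone } => {
            if tombstone {
                // Record the damaged region; the peer is destroyed later once
                // PD issues a remove-peer operator for it.
                meta.lock().unwrap().damaged_regions.insert(region_id);
                println!("peer {peer_id} of region {region_id} is marked for GC");
            }
        }
    }
}

fn main() {
    let meta = Arc::new(Mutex::new(StoreMeta::default()));
    let msg = PeerMsg::SnapshotApplied { peer_id: 19604003, tombstone: true };
    on_peer_msg(&meta, 16697056, msg);
    assert!(meta.lock().unwrap().damaged_regions.contains(&16697056));
}
```

In the actual change, the recorded regions are then reported to PD through the store heartbeat, and the peer is destroyed once PD schedules a remove-peer operator.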

Related changes

  • PR to update pingcap/docs/pingcap/docs-cn:
  • Need to cherry-pick to the release branch

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Release note

None.

ti-chi-bot bot (Contributor) commented May 10, 2024

[REVIEW NOTIFICATION]

This pull request has not been approved.

To complete the pull request process, please ask the reviewers in the list to review by filling /cc @reviewer in the comment.
After your PR has acquired the required number of LGTMs, you can assign this pull request to the committer in the list by filling /assign @committer in the comment to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by submitting an approval review.
Reviewer can cancel approval by submitting a request changes review.

ti-chi-bot bot (Contributor) commented May 10, 2024

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@glorv (Contributor) commented May 14, 2024

I have 2 questions about this PR:

  1. Is it possible to directly retry applying the snapshot if the failure is caused by an unexpected (maybe IO-layer) error?
  2. Why does the peer need to be tombstoned after applying the snapshot fails? Why not just switch the peer status back to normal so that the leader starts a new snapshot automatically?

@LykxSassinator (Contributor, Author) commented:
> I have 2 questions about this PR:
>
>   1. Is it possible to directly retry applying the snapshot if the failure is caused by an unexpected (maybe IO-layer) error?
>   2. Why does the peer need to be tombstoned after applying the snapshot fails? Why not just switch the peer status back to normal so that the leader starts a new snapshot automatically?

  1. Nope, the failure is caused by loading abnormal blocks. According to the logs in "tikv panic repeatedly after this tikv recover from io hang" #15292 and "tikv panic repeatedly with \"[region 16697056] 19604003 applying snapshot failed\" after down this tikv for 20mins and recover" #16958, the given snapshot contains abnormal file blocks, which caused the apply to fail.
  2. For safety. Tombstoning the peer and destroying it ensures that this TiKV node does not retain any remaining data or metadata about this peer.

@LykxSassinator (Contributor, Author) commented May 14, 2024

By the way, as for point 2, I agree with what you mentioned. But for safety, this PR takes the current implementation.

> Why not just switch the peer status back to normal so that the leader starts a new snapshot automatically?

I'll give point 2 a try and run some extra tests on it to find the more appropriate approach.

@overvenus (Member) commented:
I don't think we can directly mark it as Tombstone or Normal, because both options violate the current raft state machine protocol.

  • Tombstone indicates that the peer has been fully removed, including its Raft membership and data. However, in this case, the peer remains a valid member.
  • Normal means that the peer has all data up to its last commit index. But, in this case, it does not, as its last commit index has been updated to the snapshot index.

There are two ways to fix the panic:

  1. Have PD remove this peer via a confchange, which I believe is the simplest solution (a rough sketch of this approach follows below).
  2. Introduce a new RPC to instruct the leader to resend the snapshot, which may require changing a lot of code.
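
For illustration, a rough sketch of option 1, which is the direction this PR takes: the store heartbeat reports the recorded damaged region ids to PD, and PD removes the damaged peers via confchange. The `StoreHeartbeatRequest`, `PdClient`, and `report_damaged_regions` names below are hypothetical placeholders, not the real kvproto or PD client APIs:

```rust
use std::collections::HashSet;

// Hypothetical, simplified stand-ins for the heartbeat payload and the PD
// client; the real kvproto/PD definitions differ.
struct StoreHeartbeatRequest {
    store_id: u64,
    // Region ids whose peers failed to apply snapshots on this store.
    damaged_regions: Vec<u64>,
}

trait PdClient {
    fn store_heartbeat(&self, req: StoreHeartbeatRequest);
}

// A toy PD that only logs what a real PD would turn into remove-peer operators.
struct LoggingPd;

impl PdClient for LoggingPd {
    fn store_heartbeat(&self, req: StoreHeartbeatRequest) {
        for region_id in req.damaged_regions {
            // PD removes the damaged peer through a confchange (option 1).
            println!(
                "PD: schedule remove-peer for region {region_id} on store {}",
                req.store_id
            );
        }
    }
}

// Called on each store heartbeat tick: drain the recorded damaged regions and
// report them to PD so the peers can be removed safely.
fn report_damaged_regions(pd: &impl PdClient, store_id: u64, damaged: &mut HashSet<u64>) {
    if damaged.is_empty() {
        return;
    }
    let req = StoreHeartbeatRequest {
        store_id,
        damaged_regions: damaged.drain().collect(),
    };
    pd.store_heartbeat(req);
}

fn main() {
    let mut damaged: HashSet<u64> = [16697056].into_iter().collect();
    report_damaged_regions(&LoggingPd, 1, &mut damaged);
    assert!(damaged.is_empty());
}
```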

@glorv (Contributor) commented May 14, 2024

> Introduce a new RPC to instruct the leader to resend the snapshot, which may require changing a lot of code.

Why is this extra RPC needed? On the leader side, the peer's state switches back to normal after the snapshot is sent. On the follower side, when applying the snapshot fails, it is also doable to restore the raft state to its previous state before this snapshot. So, if I understand correctly, the leader should start another snapshot without any extra operation?

@overvenus (Member) commented May 14, 2024

> On the follower side, when applying the snapshot fails, it is also doable to restore the raft state to its previous state before this snapshot.

Do you mean persisting the previous state so it can be restored even after restarting TiKV?

That's doable (without introducing a new RPC), but it does add extra complexity to raftstore (and we'll need to review every code path related to snapshot handling).

@LykxSassinator (Contributor, Author) commented:
This PR needs extra support from the PD side to make the implementation fully valid.

Hold until tikv/pd#8266 is merged.

@Connor1996 (Member) left a comment

Please remove this logic for backward compatibility as we can handle abnormal snapshots now
(screenshot of the referenced code attached in the original comment)

@overvenus (Member) left a comment

Rest LGTM

components/raftstore/src/store/fsm/peer.rs (review thread resolved)
tests/failpoints/cases/test_pending_peers.rs (outdated; review thread resolved)
@glorv (Contributor) left a comment

rest LGTM

tests/failpoints/cases/test_pending_peers.rs (outdated; review thread resolved)
@glorv (Contributor) left a comment

LGTM

ti-chi-bot bot added the lgtm label Jun 14, 2024
ti-chi-bot bot (Contributor) commented Jun 14, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: glorv, overvenus

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

ti-chi-bot bot (Contributor) commented Jun 14, 2024

[LGTM Timeline notifier]

Timeline:

  • 2024-06-13 11:58:26.309511355 +0000 UTC m=+638660.362823275: ☑️ agreed by glorv.
  • 2024-06-14 03:14:58.239586396 +0000 UTC m=+693652.292898321: ☑️ agreed by overvenus.

ti-chi-bot bot merged commit dd37a47 into tikv:master Jun 14, 2024
8 checks passed
ti-chi-bot bot added this to the Pool milestone Jun 14, 2024
Successfully merging this pull request may close these issues.

tikv panic repeatedly after this tikv recover from io hang
5 participants