doc/cephfs: repairing inaccessible FSes #51359

Conversation

@zdover23 (Contributor) commented May 5, 2023

Add a procedure to doc/cephfs/troubleshooting.rst that explains how to restore access to file systems that became inaccessible after post-Nautilus upgrades. The procedure included here was written by Harry G Coin and only lightly edited by me. I include him here as a "co-author", but it should be noted that he did the heavy lifting on this.

See the email thread here for more context:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/HS5FD3QFR77NAKJ43M2T5ZC25UYXFLNW/
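
For readers without the patch in front of them: the recovery commands themselves live in the new troubleshooting.rst section, but a first-pass diagnosis of an inaccessible file system typically starts with the standard status commands, for example:

    # Illustrative only; the actual recovery steps are in the patched
    # troubleshooting.rst, not reproduced in this PR conversation.
    ceph health detail   # look for MDS_* or FS_DEGRADED warnings
    ceph fs status       # per-filesystem MDS states and pool usage
    ceph fs dump         # full FSMap, including flags and compat sets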

@zdover23 requested a review from a team as a code owner May 5, 2023 06:39
@github-actions bot added the cephfs and documentation labels May 5, 2023
@zdover23 requested a review from vshankar May 5, 2023 06:45
@anthonyeleven (Contributor) left a comment

A few suggestions inline, plus a note that one might prima facie think it shouldn't be backported, but I suggest that doing so will help people avoid the situation in the first place.

3 review threads on doc/cephfs/troubleshooting.rst (outdated, resolved)
@anthonyeleven (Contributor) commented

Might we also want to add words to the effect that performing this step before such an upgrade is advisable to prevent the problem in the first place?
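
As a hypothetical sketch (the concrete preventive step is the one this PR documents, not these commands), such a pre-upgrade check could be as lightweight as confirming that every file system reports healthy MDS ranks before the upgrade begins:

    # Hypothetical pre-upgrade sanity check; the preventive step itself
    # is the one documented in the patched troubleshooting.rst.
    ceph fs ls           # enumerate the cluster's file systems
    ceph fs status       # confirm each MDS rank is active
    ceph versions        # confirm daemon versions before proceeding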

@zdover23 (Contributor, Author) commented May 5, 2023

A few suggestions inline, plus a note that one might prima facie think it shouldn't be backported, but I suggest that doing so will help people avoid the situation in the first place.

I'm considering backporting this to all of the branches from Nautilus to Reef inclusive.

@zdover23 (Contributor, Author) commented May 5, 2023

Might we also want to add words to the effect that performing this step before such an upgrade is advisable to prevent the problem in the first place?

I think that the existence of this PR shows that the whole CephFS documentation suite post-Nautilus should be reviewed with the goal of at the very least using documentation to fix this compatibility-breaking... uh... unmasked regression. However, it'll be a few months before we get to the CephFS docs review given our current manpower. So, in short: yeah. I'll add a note.

@zdover23 force-pushed the wip-doc-2023-05-05-cephfs-troubleshooting-post-upgrade-inaccessible-filesystems branch from 890507f to 2430127 on May 5, 2023 23:40
@anthonyeleven merged commit 55de546 into ceph:main on May 6, 2023
11 checks passed
@zdover23 (Contributor, Author) commented May 6, 2023

#51371 - Reef backport
#51372 - Quincy backport
#51373 - Pacific backport
#51374 - Octopus backport
#51375 - Nautilus backport
