
mds: enqueue ~mdsdir at the time of enqueing root #51539

Merged
merged 5 commits into from Jul 18, 2023

Conversation


@dparmar18 dparmar18 commented May 17, 2023

This avoids the need to run individual scrubs for
~mdsdir and root, i.e. both scrubs now run under the
same header. It also avoids an edge case where, if
~mdsdir is huge and takes a long time to scrub, the
scrub status would report something like this until
the root inodes kick in:

{
    "status": "scrub active (757 inodes in the stack)",
    "scrubs": {}
}

Fixes: https://tracker.ceph.com/issues/59350
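The edge case above can be illustrated with a minimal Python simulation (not the actual Ceph MDS code; `ScrubHeader`, `scrub_status`, and the tag value are made-up names). When ~mdsdir is scrubbed under its own untagged header, the `scrubs` map stays empty until root is enqueued; enqueuing ~mdsdir under the same tagged header as root keeps the status non-empty from the start:

```python
# Minimal simulation of the reported behavior; these classes and
# functions are illustrative only, not the real Ceph MDS internals.

class ScrubHeader:
    def __init__(self, tag):
        self.tag = tag  # an empty tag is skipped by 'scrub status'

def scrub_status(active_headers, inodes_in_stack):
    # Only tagged headers are dumped, mirroring the old behavior of
    # hiding the untagged ~mdsdir header from the status output.
    scrubs = {h.tag: "scrubbing" for h in active_headers if h.tag}
    return {
        "status": f"scrub active ({inodes_in_stack} inodes in the stack)",
        "scrubs": scrubs,
    }

# Before: ~mdsdir runs under its own untagged header -> empty map
before = scrub_status([ScrubHeader("")], 757)

# After: ~mdsdir is enqueued when root is enqueued, sharing root's
# tagged header -> the scrubs map is populated immediately
after = scrub_status([ScrubHeader("mytag")], 757)
```

This only models the status-reporting symptom; the actual fix lives in src/mds/ScrubStack.cc.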

Contribution Guidelines

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)

@github-actions github-actions bot added the cephfs Ceph File System label May 17, 2023
@dparmar18 dparmar18 force-pushed the wip-59350 branch 4 times, most recently from 89a2ba6 to a1da052 Compare May 18, 2023 11:00
@dparmar18 dparmar18 changed the title mds: [DNM] debug scrub status mds: do not enqueue mdsdir scrub at root(with op scrub_mdsdir) May 18, 2023
@dparmar18 dparmar18 marked this pull request as ready for review May 18, 2023 11:00
@dparmar18 dparmar18 requested review from vshankar and a team May 18, 2023 11:00
@dparmar18

If there are thousands of inodes to scrub in ~mdsdir, it will take some time for the scrub at root to end up in the scrubbing_map, i.e. the scrub status would initially show something like this

{
    "status": "scrub active (757 inodes in the stack)",
    "scrubs": {}
}

until the scrub for root kicks in. I'm currently trying to figure out a way to keep the scrub status non-empty even while ~mdsdir is being scrubbed at root.

@dparmar18 dparmar18 marked this pull request as draft May 18, 2023 13:35
@dparmar18 dparmar18 force-pushed the wip-59350 branch 6 times, most recently from 359d04a to 0ae6ef3 Compare May 22, 2023 11:46
@dparmar18 dparmar18 force-pushed the wip-59350 branch 4 times, most recently from 8c62cad to f9ebf34 Compare May 22, 2023 12:17
@dparmar18 dparmar18 changed the title mds: do not enqueue mdsdir scrub at root(with op scrub_mdsdir) mds: enqueue ~mdsdir at the time of enqueing root May 22, 2023
@dparmar18 dparmar18 marked this pull request as ready for review May 22, 2023 12:19
@dparmar18 dparmar18 requested a review from a team as a code owner May 22, 2023 12:19
@dparmar18

Just FYI, this PR is an enhancement of #47649

@dparmar18

Working as expected; the run is all green (except one failure where the job failed due to a warning, but the tests themselves passed) - http://pulpito.front.sepia.ceph.com/dparmar-2023-05-23_00:32:04-fs:functional-wip-59350-distro-default-smithi/


@vshankar vshankar left a comment


Otherwise LGTM.

src/mds/ScrubStack.cc (review comment, resolved)
This avoids the need to run individual scrubs for
~mdsdir and root, i.e. both scrubs now run under the
same header. It also avoids an edge case where, if
~mdsdir is huge and takes a long time to scrub, the
scrub status would report something like this until
the root inodes kick in:

{
    "status": "scrub active (757 inodes in the stack)",
    "scrubs": {}
}

Fixes: https://tracker.ceph.com/issues/59350
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
Previously, two individual scrubs were initiated to scrub ~mdsdir
at root, and the ~mdsdir scrub wasn't given any tag, so it was
necessary not to dump its values in the output of 'scrub start'.
Now that the ~mdsdir and root scrubs run under a single header,
this redundant code is no longer needed, thus removing it.

Fixes: https://tracker.ceph.com/issues/59350
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
Previously, when ~mdsdir was scrubbed at CephFS root, its header
was kept empty, so it was necessary not to dump its values for
'scrub status'. Now that both scrubs (~mdsdir and root) run under
the same header, this code is no longer needed.

Fixes: https://tracker.ceph.com/issues/59350
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
The code has been changed so that, in order to scrub ~mdsdir at
root, the recursive flag also needs to be provided along with
scrub_mdsdir.

Fixes: https://tracker.ceph.com/issues/59350
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
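The constraint in the commit above (scrub_mdsdir is only valid together with the recursive flag) can be sketched as a small Python illustration. This is not the actual Ceph option parsing; `validate_scrub_flags` and the error message are made-up names for illustration only:

```python
# Illustrative validation only; not the real Ceph MDS flag handling.
def validate_scrub_flags(flags):
    """Reject scrub_mdsdir unless recursive is also supplied."""
    flags = set(flags)
    if "scrub_mdsdir" in flags and "recursive" not in flags:
        raise ValueError("scrub_mdsdir requires the recursive flag")
    return flags

# Accepted: both flags supplied together.
validate_scrub_flags(["recursive", "scrub_mdsdir"])
```

Passing `["scrub_mdsdir"]` alone would raise, mirroring the documented requirement.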
for mdsdir scrub at CephFS root.

Fixes: https://tracker.ceph.com/issues/59350
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>

@vshankar vshankar left a comment


LGTM
